├── .github
│   └── workflows
│       └── publish.yml
├── LICENSE
├── README.md
├── __init__.py
├── pyproject.toml
├── requirements.txt
└── scripts
    └── GeekyRembv2.py

/.github/workflows/publish.yml:
--------------------------------------------------------------------------------

name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
      - master
    paths:
      - "pyproject.toml"

permissions:
  issues: write

jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'GeekyGhost' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@v1
        with:
          ## Add your own personal access token to your GitHub repository secrets and reference it here.
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------

MIT License

Copyright (c) 2024 GeekyGhost

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# GeekyRemB: Advanced Background Removal & Image Processing Node Suite for ComfyUI

GeekyRemB is a sophisticated image processing node suite that brings professional-grade background removal, blending, animation capabilities, and lighting effects to ComfyUI. It combines AI-powered processing with traditional image manipulation techniques to offer a comprehensive solution for complex image processing tasks.

*(Screenshot: 2025-03-12 070830)*

## Table of Contents
1. [Installation](#installation)
2. [Node Overview](#node-overview)
3. [Core Node: Geeky RemB](#core-node-geeky-remb)
4. [Animation Node: Geeky RemB Animator](#animation-node-geeky-remb-animator)
5. [Lighting & Shadow Node: Geeky RemB Light & Shadow](#lighting--shadow-node-geeky-remb-light--shadow)
6. [Keyframe Position Node: Geeky RemB Keyframe Position](#keyframe-position-node-geeky-remb-keyframe-position)
7. [Workflow Examples](#workflow-examples)
8. [Advanced Features](#advanced-features)
9. [Troubleshooting](#troubleshooting)
10. [License](#license)
11. [Acknowledgements](#acknowledgements)

---

## Installation

1. **Install ComfyUI** if you haven't already. Follow the [ComfyUI installation guide](https://github.com/comfyanonymous/ComfyUI) for detailed instructions.

2. **Clone the GeekyRemB repository** into your ComfyUI custom nodes directory:
   ```bash
   cd ComfyUI/custom_nodes
   git clone https://github.com/GeekyGhost/ComfyUI-GeekyRemB.git
   ```

3. **Install dependencies**:
   ```bash
   cd ComfyUI-GeekyRemB
   pip install -r requirements.txt
   ```

4. **Restart ComfyUI** to load the new nodes.

---

## Node Overview

GeekyRemB consists of four interconnected nodes that work together to provide a complete image processing system:

1. **Geeky RemB** - The core node handling background removal, mask processing, and composition
2. **Geeky RemB Animator** - Provides animation parameters for creating dynamic sequences
3. **Geeky RemB Light & Shadow** - Controls lighting effects and shadow generation
4. **Geeky RemB Keyframe Position** - Enables precise control through keyframe-based animation

These nodes can be used independently or connected together for advanced workflows.

---

## Core Node: Geeky RemB

The main node responsible for background removal, image processing, and composition.

### Key Features

- **AI-Powered Background Removal**
  - Multiple rembg models optimized for different subjects
  - Professional chroma keying with color selection and tolerance control

- **Advanced Mask Processing**
  - Mask expansion/contraction
  - Edge detection and refinement
  - Blur and threshold controls
  - Small region removal

- **Image Composition**
  - Position control for foreground elements
  - Aspect ratio and scaling options
  - Alpha channel management

- **Animation Support**
  - Multi-frame processing
  - Integration with animation and lighting nodes

### Usage Instructions

1. **Basic Background Removal**:
   - Connect your input image to the `foreground` input
   - Set `enable_background_removal` to true
   - Choose your removal method (`rembg` or `chroma_key`)
   - For `rembg`, select an appropriate model for your subject
   - For `chroma_key`, select the key color and adjust tolerance (a minimal sketch of this keying approach follows this list)

2. **Mask Refinement**:
   - Use `mask_expansion` to grow or shrink the mask (positive values expand, negative contract)
   - Enable `edge_detection` and adjust `edge_thickness` for sharper outlines
   - Use `mask_blur` to smooth edges
   - Enable `remove_small_regions` and set `small_region_size` to clean up noise

3. **Advanced Composition**:
   - Connect a background image to the `background` input
   - Set `x_position` and `y_position` to place the foreground
   - Adjust `scale` to resize the foreground
   - Specify `aspect_ratio` for consistent dimensions

4. **Using Alpha Matting**:
   - Enable `alpha_matting` for high-quality edge refinement
   - Adjust `alpha_matting_foreground_threshold` and `alpha_matting_background_threshold` for fine control
   - Useful for subjects with fine details like hair or fur

5. **Animation Setup**:
   - Set `frames` to the desired number of output frames
   - Connect an animator node to the `animator` input
   - Connect a light & shadow node to the `lightshadow` input
   - Connect keyframe nodes to the `keyframe` input for precise control
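
As promised in the usage notes above, here is a minimal sketch of what the `chroma_key` path does. It mirrors the HSV-based keying used by `remove_background_chroma` in `scripts/GeekyRembv2.py` below, simplified to the green case; the input file name is a placeholder.

```python
import cv2
import numpy as np
from PIL import Image

def simple_green_key(image: Image.Image, tolerance: float = 0.1) -> Image.Image:
    """Return an L-mode mask where 255 = foreground, keying out green."""
    rgb = np.array(image.convert("RGB"))
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    spread = int(30 * tolerance)                   # widen the hue window with tolerance
    lower = np.array([55 - spread, 30, 30])        # green sits near hue 60 (OpenCV hue range 0-179)
    upper = np.array([65 + spread, 255, 255])
    background = cv2.inRange(hsv, lower, upper)    # 255 where the key color matches
    return Image.fromarray(255 - background, "L")  # invert so the subject becomes 255

mask = simple_green_key(Image.open("greenscreen.png"), tolerance=0.2)  # placeholder file
```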

### Parameters Guide

#### Essential Parameters
- **output_format**: Choose between RGBA (with transparency) or RGB output
- **foreground**: The input image to process
- **enable_background_removal**: Toggle background removal processing
- **removal_method**: Choose between AI-based (`rembg`) or color-based (`chroma_key`)
- **model**: Select AI model for the `rembg` method
- **chroma_key_color**: Select the color to key out (`green`, `blue`, or `red`)
- **chroma_key_tolerance**: Adjust sensitivity of color keying (0.0-1.0)
- **spill_reduction**: Remove color spill from edges (0.0-1.0)

#### Mask Processing Parameters
- **mask_expansion**: Expand or contract the mask (-100 to 100)
- **edge_detection**: Enable edge detection for sharper outlines
- **edge_thickness**: Control thickness of detected edges (1-10)
- **mask_blur**: Apply blur to mask edges (0-100)
- **threshold**: Set the threshold for mask generation (0.0-1.0)
- **invert_generated_mask**: Invert the mask (switch foreground/background)
- **remove_small_regions**: Remove small artifacts from the mask
- **small_region_size**: Set minimum size of regions to keep (1-1000)

*(A sketch of how these mask parameters map onto image operations follows this parameters guide.)*

#### Composition Parameters
- **aspect_ratio**: Set output aspect ratio (e.g., "16:9", "4:3", "1:1", "portrait", "landscape")
- **scale**: Scale the foreground (0.1-5.0)
- **frames**: Number of frames to generate (1-1000)
- **x_position**: Horizontal position of foreground (-2048 to 2048)
- **y_position**: Vertical position of foreground (-2048 to 2048)

#### Advanced Parameters
- **alpha_matting**: Enable advanced edge refinement
- **alpha_matting_foreground_threshold**: Threshold for foreground detection (0-255)
- **alpha_matting_background_threshold**: Threshold for background detection (0-255)
- **edge_refinement**: Enable additional edge refinement for chroma key

#### Optional Inputs
- **background**: Secondary image for background composition
- **additional_mask**: Extra mask for complex selections
- **invert_additional_mask**: Invert the additional mask
- **animator**: Connection to GeekyRemB_Animator node
- **lightshadow**: Connection to GeekyRemB_LightShadow node
- **keyframe**: Connection to GeekyRemB_KeyframePosition node
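
As noted above, the mask-processing parameters translate into standard image operations. A rough sketch in OpenCV (simplified; the node itself uses a small 2x2 kernel for expansion and caps the blur radius, so treat the exact numbers here as illustrative):

```python
import cv2
import numpy as np

def refine(mask: np.ndarray, expansion: int = 0, blur: int = 5, small_region_size: int = 0) -> np.ndarray:
    """mask: uint8 array where 255 marks the foreground."""
    kernel = np.ones((3, 3), np.uint8)
    if expansion > 0:
        mask = cv2.dilate(mask, kernel, iterations=expansion)     # grow the mask
    elif expansion < 0:
        mask = cv2.erode(mask, kernel, iterations=-expansion)     # shrink the mask
    if small_region_size > 0:
        # Remove connected components smaller than small_region_size pixels.
        count, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, count):
            if stats[i, cv2.CC_STAT_AREA] < small_region_size:
                mask[labels == i] = 0
    if blur > 0:
        mask = cv2.GaussianBlur(mask, (blur * 2 + 1, blur * 2 + 1), 0)  # soften edges
    return mask
```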

---

## Animation Node: Geeky RemB Animator

Provides animation parameters to the main GeekyRemB node, controlling movement, transitions, and timing.

### Key Features

- **Multiple Animation Types**:
  - Movement animations (bounce, travel, slide)
  - Transform animations (scale, rotate, flip)
  - Opacity animations (fade in/out)
  - Special effects (shake, wave, pulse)

- **Animation Control**:
  - Speed and duration settings
  - Repeat and reverse options
  - Easing functions for natural motion

- **Keyframe Support**:
  - Integration with keyframe position nodes
  - Frame-accurate positioning

### Usage Instructions

1. **Basic Animation Setup**:
   - Add the Geeky RemB Animator node to your workflow
   - Select the desired `animation_type`
   - Set `animation_speed` and `animation_duration`
   - Connect the node's output to the main GeekyRemB node's `animator` input

2. **Animation Timing**:
   - Set `fps` to control the smoothness of animation
   - Adjust `repeats` to loop the animation
   - Enable `reverse` to alternate direction on repeats
   - Set `delay` to postpone the start of animation

3. **Motion Control**:
   - Set the initial position with `x_position` and `y_position`
   - Adjust `scale` and `rotation` for the starting state
   - Use `steps` to create multi-step animations
   - Set `phase_shift` for staggered animations

4. **Keyframe Animation**:
   - Enable `use_keyframes` to use keyframe-based animation
   - Connect keyframe position nodes to the `keyframe1` through `keyframe5` inputs
   - Set `easing_function` to control the interpolation between keyframes

### Parameters Guide

#### Animation Type
- **animation_type**: Choose from various animation types:
  - `none`: No animation
  - `bounce`: Up and down movement
  - `travel_left`/`travel_right`: Horizontal movement
  - `rotate`: Continuous rotation
  - `fade_in`/`fade_out`: Opacity transitions
  - `zoom_in`/`zoom_out`: Scale transitions
  - `scale_bounce`: Pulsing size changes
  - `spiral`: Combined rotation and movement
  - `shake`: Quick oscillating movements
  - `slide_up`/`slide_down`: Vertical movement
  - `flip_horizontal`/`flip_vertical`: Mirroring effects
  - `wave`: Sinusoidal movement
  - `pulse`: Periodic scaling
  - `swing`: Pendulum-like rotation
  - `spin`: Continuous spinning
  - `flash`: Brightness fluctuation

#### Timing Controls
- **animation_speed**: Rate of animation (0.1-10.0)
- **animation_duration**: Length of one animation cycle (0.1-10.0)
- **repeats**: Number of times to repeat the animation (1-10)
- **reverse**: Toggle direction reversal on repeats
- **fps**: Frames per second (1-120)
- **delay**: Wait time before animation starts (0.0-5.0)

#### Motion Parameters
- **x_position**: Initial horizontal position (-1000 to 1000)
- **y_position**: Initial vertical position (-1000 to 1000)
- **scale**: Initial scale factor (0.1-5.0)
- **rotation**: Initial rotation angle (-360.0 to 360.0)
- **steps**: Number of steps in multi-step animations (1-10)
- **phase_shift**: Phase shift for staggered animations (0.0-1.0)

#### Easing Functions
- **easing_function**: Controls how animations accelerate/decelerate (illustrated in the sketch at the end of this section):
  - `linear`: Constant speed
  - `ease_in_quad`/`ease_out_quad`/`ease_in_out_quad`: Quadratic easing
  - `ease_in_cubic`/`ease_out_cubic`/`ease_in_out_cubic`: Cubic easing
  - `ease_in_sine`/`ease_out_sine`/`ease_in_out_sine`: Sinusoidal easing
  - `ease_in_expo`/`ease_out_expo`/`ease_in_out_expo`: Exponential easing
  - `ease_in_bounce`/`ease_out_bounce`/`ease_in_out_bounce`: Bouncy easing

#### Keyframe Controls
- **use_keyframes**: Enable keyframe-based animation
- **keyframe1** through **keyframe5**: Connect to keyframe position nodes
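
To make the easing options concrete: an easing function takes the normalized progress `t` (0.0 at the start of a cycle, 1.0 at the end) and remaps it. The quadratic variants below match the implementations in `scripts/GeekyRembv2.py`; the sine variant is the standard formula, included as an assumption because its body does not appear in the excerpted code.

```python
import math

def linear(t):
    return t

def ease_in_quad(t):
    return t * t            # starts slow, accelerates

def ease_out_quad(t):
    return t * (2 - t)      # starts fast, decelerates

def ease_in_out_sine(t):
    # Standard formula (assumed); gentle acceleration and deceleration.
    return -(math.cos(math.pi * t) - 1) / 2

# A 30-frame bounce with ease_out_quad applied to the cycle progress,
# following the bounce formula used by the node (sin(progress * 2*pi) * speed * 50):
speed = 1.0
for frame in range(30):
    t = ease_out_quad(frame / 29)
    y_offset = math.sin(t * 2 * math.pi) * speed * 50
```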

---

## Lighting & Shadow Node: Geeky RemB Light & Shadow

Controls lighting effects and shadow generation for the processed images.

### Key Features

- **Realistic Lighting Effects**:
  - Directional lighting with intensity control
  - Light color and falloff adjustment
  - Normal mapping for 3D-like lighting
  - Specular highlights for reflective surfaces

- **Dynamic Shadow Generation**:
  - Shadow opacity and blur control
  - Shadow direction and color customization
  - Perspective shadows for realism
  - Distance-based shadow fading

### Usage Instructions

1. **Basic Lighting Setup**:
   - Add the Geeky RemB Light & Shadow node to your workflow
   - Enable `enable_lighting` to activate lighting effects
   - Adjust `light_intensity` to control strength
   - Set `light_direction_x` and `light_direction_y` to position the light source
   - Connect the node's output to the main GeekyRemB node's `lightshadow` input

2. **Light Customization**:
   - Choose between RGB color control or Kelvin temperature
   - Enable `use_kelvin_temperature` and set `kelvin_temperature` for natural lighting (a sketch of this mapping follows this list)
   - Or adjust `light_color_r`, `light_color_g`, and `light_color_b` for custom colors
   - Set `light_radius` and `light_falloff` to control illumination area

3. **Advanced Lighting**:
   - Enable `enable_normal_mapping` for 3D-like lighting effects
   - Turn on `enable_specular` and adjust `specular_intensity` for highlights
   - Set `specular_shininess` to control highlight sharpness
   - Adjust `ambient_light` for global illumination level

4. **Shadow Configuration**:
   - Enable `enable_shadow` to activate shadow generation
   - Set `shadow_opacity` and `shadow_blur` for shadow appearance
   - Adjust `shadow_direction_x` and `shadow_direction_y` for shadow placement
   - Customize `shadow_color_r`, `shadow_color_g`, and `shadow_color_b`

5. **Realistic Shadows**:
   - Enable `perspective_shadow` for distance-based perspective effects
   - Set `light_source_height` to control shadow length
   - Turn on `distance_fade` and adjust `fade_distance` for natural fading
   - Toggle `soft_edges` for realistic shadow edges
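
For background, `kelvin_temperature` style controls are usually implemented with a curve-fit from color temperature to an RGB tint. The sketch below uses the well-known Tanner Helland approximation; it is shown for intuition and is not necessarily the exact mapping the node applies.

```python
import math

def kelvin_to_rgb(kelvin: float) -> tuple:
    """Approximate RGB tint for a color temperature in the 2000-10000 K range."""
    t = min(max(kelvin, 2000.0), 10000.0) / 100.0
    if t <= 66:
        r = 255.0
        g = 99.47 * math.log(t) - 161.12       # empirical curve-fit coefficients
    else:
        r = 329.70 * (t - 60) ** -0.1332
        g = 288.12 * (t - 60) ** -0.0755
    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.52 * math.log(t - 10) - 305.04
    clamp = lambda v: int(max(0.0, min(255.0, v)))
    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(3200))   # warm tungsten light, roughly (255, 183, 123)
```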

### Parameters Guide

#### Lighting Controls
- **enable_lighting**: Toggle lighting effects
- **light_intensity**: Strength of lighting effect (0.0-1.0)
- **light_direction_x**: Horizontal light direction (-200 to 200)
- **light_direction_y**: Vertical light direction (-200 to 200)
- **light_radius**: Area of light effect (10-500)
- **light_falloff**: Rate of light falloff (0.1-3.0)
- **light_from_behind**: Toggle backlighting effect

#### Light Color
- **use_kelvin_temperature**: Use color temperature instead of RGB
- **kelvin_temperature**: Light color temperature (2000-10000K)
- **light_color_r/g/b**: RGB components of light color (0-255)

#### Advanced Lighting
- **enable_normal_mapping**: Enable 3D-like lighting effects
- **enable_specular**: Add specular highlights
- **specular_intensity**: Strength of highlights (0.0-1.0)
- **specular_shininess**: Sharpness of highlights (1-128)
- **ambient_light**: Global illumination level (0.0-1.0)
- **light_source_height**: Height of light source (50-500)

#### Shadow Controls
- **enable_shadow**: Toggle shadow generation
- **shadow_opacity**: Shadow transparency (0.0-1.0)
- **shadow_blur**: Shadow edge softness (0-50)
- **shadow_direction_x**: Horizontal shadow offset (-50 to 50)
- **shadow_direction_y**: Vertical shadow offset (-50 to 50)
- **shadow_expansion**: Shadow size adjustment (-10 to 20)
- **shadow_color_r/g/b**: RGB components of shadow color (0-255)

#### Advanced Shadow
- **perspective_shadow**: Enable perspective-based shadows
- **distance_fade**: Fade shadow with distance
- **fade_distance**: Distance at which shadow begins to fade (10-500)
- **soft_edges**: Toggle soft shadow edges

---

## Keyframe Position Node: Geeky RemB Keyframe Position

Provides precise control over animation through keyframe-based positioning.

### Key Features

- **Frame-Specific Controls**:
  - Position, scale, and rotation settings for specific frames
  - Opacity control for visibility transitions
  - Easing function selection for smooth interpolation

### Usage Instructions

1. **Creating Keyframes**:
   - Add the Geeky RemB Keyframe Position node to your workflow
   - Set `frame_number` to the target frame
   - Adjust `x_position` and `y_position` for placement
   - Set `scale` and `rotation` as needed
   - Connect multiple keyframe nodes to the Animator's keyframe inputs

2. **Keyframe Configuration**:
   - Define the canvas size with `width` and `height`
   - Set `opacity` for transparency control
   - Select an `easing` function for interpolation between keyframes

3. **Building Keyframe Sequences**:
   - Create multiple keyframe nodes with different frame numbers
   - Connect them to consecutive keyframe inputs on the Animator node
   - Enable `use_keyframes` on the Animator node

### Parameters Guide

#### Canvas Settings
- **width**: Width of the animation canvas (64-4096)
- **height**: Height of the animation canvas (64-4096)

#### Keyframe Controls
- **frame_number**: Target frame for this keyframe (0-1000)
- **x_position**: Horizontal position at this keyframe (-2048 to 2048)
- **y_position**: Vertical position at this keyframe (-2048 to 2048)
- **scale**: Scale factor at this keyframe (0.1-5.0)
- **rotation**: Rotation angle at this keyframe (-360.0 to 360.0)
- **opacity**: Transparency at this keyframe (0.0-1.0)

#### Interpolation
- **easing**: Easing function for interpolation to the next keyframe
  - Options match the easing functions available in the Animator node
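
Conceptually, the Animator interpolates each property between neighboring keyframes and runs the result through the selected easing function. The sketch below shows the general technique in plain Python; it is illustrative and not the node's exact code.

```python
import math

def ease_in_out_sine(t: float) -> float:
    # Standard easing curve; progress t runs from 0.0 to 1.0.
    return -(math.cos(math.pi * t) - 1) / 2

def interpolate(kf_a: dict, kf_b: dict, frame: int, easing=ease_in_out_sine) -> dict:
    """Blend position/scale/rotation between two keyframe dicts for a given frame."""
    span = max(kf_b["frame"] - kf_a["frame"], 1)
    t = easing(min(max((frame - kf_a["frame"]) / span, 0.0), 1.0))
    lerp = lambda a, b: a + (b - a) * t
    return {k: lerp(kf_a[k], kf_b[k]) for k in ("x_position", "y_position", "scale", "rotation")}

# Frame 7 of the "Keyframe Animation Sequence" example shown later in this README:
kf1 = {"frame": 0, "x_position": 0, "y_position": 0, "scale": 1.0, "rotation": 0.0}
kf2 = {"frame": 15, "x_position": 200, "y_position": -50, "scale": 1.0, "rotation": 45.0}
print(interpolate(kf1, kf2, frame=7))
```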

---

## Workflow Examples

### Basic Background Removal

1. Connect an image source to Geeky RemB's `foreground` input
2. Set `enable_background_removal` to true
3. Choose `rembg` as the removal method
4. Select an appropriate model (e.g., `u2net` for general purposes, `isnet-anime` for anime images)
5. Adjust mask processing parameters as needed
6. Connect the output to your workflow

### Animated Character with Lighting

1. Add a Geeky RemB Animator node
   - Set `animation_type` to `bounce`
   - Set `animation_speed` to 1.0
   - Set `repeats` to 2
   - Set `easing_function` to `ease_in_out_sine`

2. Add a Geeky RemB Light & Shadow node
   - Enable `enable_lighting` and `enable_shadow`
   - Set `light_direction_x` to -50 and `light_direction_y` to -100
   - Set `shadow_opacity` to 0.4
   - Set `shadow_blur` to 15

3. Connect both to a Geeky RemB node
   - Connect your character image to `foreground`
   - Connect a background image to `background`
   - Set `frames` to 30
   - Set `enable_background_removal` to true

### Keyframe Animation Sequence

1. Create multiple Geeky RemB Keyframe Position nodes:
   - Keyframe 1: `frame_number`: 0, `x_position`: 0, `y_position`: 0
   - Keyframe 2: `frame_number`: 15, `x_position`: 200, `y_position`: -50, `rotation`: 45
   - Keyframe 3: `frame_number`: 30, `x_position`: 400, `y_position`: 0, `rotation`: 0

2. Add a Geeky RemB Animator node:
   - Set `use_keyframes` to true
   - Connect the keyframe nodes to `keyframe1`, `keyframe2`, and `keyframe3`
   - Set `fps` to 30

3. Connect to a Geeky RemB node:
   - Set `frames` to 30
   - Set other parameters as needed

---

## Advanced Features

### Multi-frame Processing with Thread Pooling

GeekyRemB optimizes performance by using thread pools to process multiple frames in parallel. This makes it efficient for handling animations and batch processing.

### Sophisticated Caching System

An LRU (Least Recently Used) cache system is implemented to store and reuse processed frames, reducing redundant computations and improving performance.
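
The combination of these two mechanisms is easy to picture with a small, self-contained sketch. This illustrates the pattern (a `ThreadPoolExecutor` plus an LRU frame cache), not the node's actual code; `render_frame` and the cache size are placeholder assumptions.

```python
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor

def render_frame(index: int) -> str:
    # Placeholder for the real per-frame work (background removal, compositing, ...)
    return f"frame-{index}"

cache = OrderedDict()
MAX_CACHE = 100  # assumed cache size; the node caps its LRU cache similarly

def get_frames(total_frames: int, max_workers: int = 8) -> list:
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = {}
        for i in range(total_frames):
            if i in cache:
                cache.move_to_end(i)        # mark as recently used
                results[i] = cache[i]       # cache hit: no recomputation
            else:
                pending[pool.submit(render_frame, i)] = i
        for future, i in pending.items():
            results[i] = future.result()
            cache[i] = results[i]
            if len(cache) > MAX_CACHE:
                cache.popitem(last=False)   # evict the least recently used frame
    return [results[i] for i in range(total_frames)]
```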

### Edge Refinement Techniques

Multiple edge processing methods are available, including alpha matting, edge detection, and mask refinement, enabling high-quality results even with complex subjects.

### Perspective Shadow Generation

The shadow system can create realistic perspective-based shadows that simulate the effect of a 3D light source, adding depth to compositions.

### Normal Mapping for 3D-like Lighting

Advanced lighting effects include normal mapping, which simulates surface details for more realistic illumination without requiring actual 3D models.
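
As a geometric intuition for the perspective shadows described above, consider projecting each foreground pixel onto the ground plane: the higher a point sits above the object's base, the further it is displaced, which stretches the silhouette away from the light. A toy version of that projection (names such as `light_height` are illustrative stand-ins, not the node's internals):

```python
def shadow_shift(pixel_y: int, base_y: int, light_height: float, light_dir_x: float) -> float:
    """Horizontal displacement of a pixel's shadow on the ground plane."""
    height_above_ground = base_y - pixel_y      # taller points cast further
    return light_dir_x * height_above_ground / max(light_height, 1.0)

# A point 100 px above the base, with the light 200 px high and offset 50 px:
print(shadow_shift(pixel_y=300, base_y=400, light_height=200.0, light_dir_x=50.0))  # 25.0
```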

---

## Troubleshooting

### Common Issues and Solutions

1. **Slow Background Removal**
   - Try using a lighter model like `u2netp` instead of `u2net`
   - Reduce the image size before processing
   - Ensure GPU acceleration is available and enabled

2. **Poor Edge Quality**
   - Enable `alpha_matting` for better edge refinement
   - Adjust `mask_blur` for smoother edges
   - Try different `mask_expansion` values

3. **Memory Issues with Animation**
   - Reduce the number of frames
   - Lower the resolution of input images
   - Close other memory-intensive applications

4. **Missing Shadow or Light Effects**
   - Verify that both `enable_lighting` and `enable_shadow` are turned on
   - Check that the Light & Shadow node is connected to the main node
   - Adjust direction values to ensure effects are visible

5. **Keyframes Not Working**
   - Confirm that `use_keyframes` is enabled on the Animator node
   - Check that keyframe nodes are connected in the correct order
   - Verify that frame numbers are set correctly and within range

### Performance Optimization

- Use the appropriate rembg model for your needs - lighter models like `u2netp` are faster
- For batch processing, set a reasonable number of frames to avoid memory issues
- Adjust thread count if needed by modifying the `max_workers` value in the code
- Pre-process images to reduce resolution before applying effects

---

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.

**Note**: Some included models may have separate licensing requirements for commercial use.

---

## Acknowledgements

GeekyRemB builds upon several outstanding open-source projects:

- [Rembg](https://github.com/danielgatis/rembg) by Daniel Gatis: Core background removal capabilities
- [ComfyUI](https://github.com/comfyanonymous/ComfyUI): The foundation of our node system
- [WAS Node Suite](https://github.com/WASasquatch/was-node-suite-comfyui): Inspiration for layer utility features

Special thanks to:

- **The ComfyUI Community**: For valuable feedback and suggestions
- **Open-Source Contributors**: Who help improve the nodes continuously
- **AI Model Creators**: Whose work enables our advanced background removal features

---

For updates, issues, or contributions, please visit the [GitHub repository](https://github.com/GeekyGhost/ComfyUI-GeekyRemB). We welcome feedback and contributions from the community.

--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------

from .scripts.GeekyRembv2 import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS

# Optionally, you can also add:
__all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS']

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

[project]
name = "comfyui-geekyremb"
description = "GeekyRemB is a powerful suite of image processing nodes for ComfyUI, offering advanced background removal, animation, lighting effects, and keyframe-based positioning. Built on the rembg library with additional capabilities for chroma keying, mask refinement, realistic lighting, shadow generation, and dynamic animations."
version = "1.0.0"
license = {file = "LICENSE"}
dependencies = [
    "numpy",
    "rembg",
    "Pillow",
    "torch",
    "opencv-python",
    "tqdm",
    "huggingface-hub",
    "scikit-image",
    "scipy",
    "onnxruntime-gpu",  # For GPU support
    "transformers"
]

[project.urls]
Repository = "https://github.com/GeekyGhost/ComfyUI-GeekyRemB"
# Used by Comfy Registry https://comfyregistry.org

[tool.comfy]
PublisherId = ""
DisplayName = "ComfyUI-GeekyRemB"
Icon = ""

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

numpy
opencv-python
torch
torchvision
Pillow
rembg
transformers
tqdm
huggingface-hub
scikit-image
scipy
onnxruntime-gpu

--------------------------------------------------------------------------------
/scripts/GeekyRembv2.py:
--------------------------------------------------------------------------------

import numpy as np
import torch
import cv2
from PIL import Image, ImageFilter, ImageOps, ImageEnhance
from rembg import remove, new_session
from enum import Enum, auto
from dataclasses import dataclass
import math
from tqdm import tqdm
from scipy import ndimage
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple, Optional, Union, Dict, Callable
from threading import Lock
from multiprocessing import cpu_count
import os
import gc
import logging
import warnings

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
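

# NOTE: this file constructs an LRUCache for frame caching (see
# EnhancedGeekyRemB.__init__ below) but does not define or import one earlier
# in the file, which would raise a NameError at runtime. A minimal dict-backed
# implementation is supplied here as an assumption; any cache exposing
# __setitem__, __getitem__, __contains__ and clear() would serve equally well.
from collections import OrderedDict


class LRUCache:
    """Least-recently-used cache with a fixed maximum size."""

    def __init__(self, maxsize=100):
        self.maxsize = maxsize
        self._data = OrderedDict()

    def __setitem__(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)          # mark as most recently used
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)   # evict the oldest entry

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)
        return value

    def __contains__(self, key):
        return key in self._data

    def clear(self):
        self._data.clear()
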
* image.numpy().squeeze(), 0, 255).astype(np.uint8)) 32 | except Exception as e: 33 | logger.error(f"Error converting tensor to PIL: {str(e)}") 34 | return Image.new('RGB', (image.shape[-2], image.shape[-1]), (0, 0, 0)) 35 | 36 | def pil2tensor(image): 37 | """Convert a PIL Image to a PyTorch tensor""" 38 | try: 39 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 40 | except Exception as e: 41 | logger.error(f"Error converting PIL to tensor: {str(e)}") 42 | return torch.zeros((1, 3, image.size[1], image.size[0])) 43 | 44 | def debug_tensor_info(tensor, name="Tensor"): 45 | """Utility function to debug tensor information""" 46 | try: 47 | logger.info(f"{name} shape: {tensor.shape}") 48 | logger.info(f"{name} dtype: {tensor.dtype}") 49 | logger.info(f"{name} device: {tensor.device}") 50 | logger.info(f"{name} min: {tensor.min()}") 51 | logger.info(f"{name} max: {tensor.max()}") 52 | except Exception as e: 53 | logger.error(f"Error debugging tensor info: {str(e)}") 54 | 55 | # Easing Functions 56 | def linear(t): 57 | return t 58 | 59 | def ease_in_quad(t): 60 | return t * t 61 | 62 | def ease_out_quad(t): 63 | return t * (2 - t) 64 | 65 | def ease_in_out_quad(t): 66 | return 2*t*t if t < 0.5 else -1 + (4 - 2*t)*t 67 | 68 | def ease_in_cubic(t): 69 | return t ** 3 70 | 71 | def ease_out_cubic(t): 72 | return (t - 1) ** 3 + 1 73 | 74 | def ease_in_out_cubic(t): 75 | return 4*t*t*t if t < 0.5 else (t-1)*(2*t-2)*(2*t-2)+1 76 | 77 | # Add more easing functions as needed 78 | 79 | EASING_FUNCTIONS = { 80 | "linear": linear, 81 | "ease_in_quad": ease_in_quad, 82 | "ease_out_quad": ease_out_quad, 83 | "ease_in_out_quad": ease_in_out_quad, 84 | "ease_in_cubic": ease_in_cubic, 85 | "ease_out_cubic": ease_out_cubic, 86 | "ease_in_out_cubic": ease_in_out_cubic, 87 | # Add more mappings 88 | } 89 | 90 | class AnimationType(Enum): 91 | NONE = "none" 92 | BOUNCE = "bounce" 93 | TRAVEL_LEFT = "travel_left" 94 | TRAVEL_RIGHT = "travel_right" 95 | ROTATE = "rotate" 96 | FADE_IN = "fade_in" 97 | FADE_OUT = "fade_out" 98 | ZOOM_IN = "zoom_in" 99 | ZOOM_OUT = "zoom_out" 100 | SCALE_BOUNCE = "scale_bounce" # Existing animation type 101 | SPIRAL = "spiral" # Existing animation type 102 | SHAKE = "shake" # New animation type 103 | SLIDE_UP = "slide_up" # New animation type 104 | SLIDE_DOWN = "slide_down" # New animation type 105 | FLIP_HORIZONTAL = "flip_horizontal" # New animation type 106 | FLIP_VERTICAL = "flip_vertical" # New animation type 107 | WAVE = "wave" # New animation type 108 | PULSE = "pulse" # New animation type 109 | SWING = "swing" # New animation type 110 | SPIN = "spin" # Additional new animation type 111 | FLASH = "flash" # Additional new animation type 112 | 113 | @dataclass 114 | @dataclass 115 | class ProcessingConfig: 116 | """Configuration for image processing parameters""" 117 | enable_background_removal: bool = True 118 | removal_method: str = "rembg" 119 | model: str = "u2net" 120 | # Chroma key color can now be a tuple (R,G,B) or string 121 | chroma_key_color: Union[str, Tuple[int, int, int]] = "green" 122 | chroma_key_tolerance: float = 0.1 123 | mask_expansion: int = 0 124 | edge_detection: bool = False 125 | edge_thickness: int = 1 126 | # Edge color in RGBA 127 | edge_color: Tuple[int, int, int, int] = (0, 0, 0, 255) 128 | mask_blur: int = 5 129 | threshold: float = 0.5 130 | invert_generated_mask: bool = False 131 | remove_small_regions: bool = False 132 | small_region_size: int = 100 133 | alpha_matting: bool = False 134 | 
alpha_matting_foreground_threshold: int = 240 135 | alpha_matting_background_threshold: int = 10 136 | # New parameters 137 | easing_function: str = "linear" # Default easing 138 | repeats: int = 1 # Number of repeats 139 | reverse: bool = False # Whether to reverse after each repeat 140 | delay: float = 0.0 # Delay before animation starts (in seconds or frames) 141 | animation_duration: float = 1.0 # Duration of one animation cycle 142 | steps: int = 1 # Number of steps in animation 143 | phase_shift: float = 0.0 # Phase shift for staggered animations 144 | 145 | class EnhancedBlendMode: 146 | """Enhanced blend mode operations with optimized processing""" 147 | 148 | @staticmethod 149 | def _ensure_rgba(img: np.ndarray) -> np.ndarray: 150 | if len(img.shape) == 3: 151 | if img.shape[2] == 3: 152 | alpha = np.ones((*img.shape[:2], 1), dtype=img.dtype) * 255 153 | return np.concatenate([img, alpha], axis=-1) 154 | return img 155 | return np.stack([img] * 4, axis=-1) 156 | 157 | @staticmethod 158 | def _apply_blend(target: np.ndarray, blend: np.ndarray, operation, opacity: float = 1.0) -> np.ndarray: 159 | target = EnhancedBlendMode._ensure_rgba(target).astype(np.float32) 160 | blend = EnhancedBlendMode._ensure_rgba(blend).astype(np.float32) 161 | 162 | target = target / 255.0 163 | blend = blend / 255.0 164 | 165 | target_rgb = target[..., :3] 166 | blend_rgb = blend[..., :3] 167 | target_a = target[..., 3:4] 168 | blend_a = blend[..., 3:4] 169 | 170 | result_rgb = operation(target_rgb, blend_rgb) 171 | result_a = target_a + blend_a * (1 - target_a) * opacity 172 | 173 | result = np.concatenate([ 174 | result_rgb * opacity + target_rgb * (1 - opacity), 175 | result_a 176 | ], axis=-1) 177 | 178 | return (np.clip(result, 0, 1) * 255).astype(np.uint8) 179 | 180 | @classmethod 181 | def get_blend_modes(cls) -> Dict[str, Callable]: 182 | return { 183 | "normal": cls.normal, 184 | "multiply": cls.multiply, 185 | "screen": cls.screen, 186 | "overlay": cls.overlay, 187 | "soft_light": cls.soft_light, 188 | "hard_light": cls.hard_light, 189 | "difference": cls.difference, 190 | "exclusion": cls.exclusion, 191 | "color_dodge": cls.color_dodge, 192 | "color_burn": cls.color_burn, 193 | "linear_light": cls.linear_light, # New blend mode 194 | "pin_light": cls.pin_light, # New blend mode 195 | # Add more blend modes as needed 196 | } 197 | 198 | @staticmethod 199 | def normal(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 200 | return EnhancedBlendMode._apply_blend(target, blend, lambda t, b: b, opacity) 201 | 202 | @staticmethod 203 | def multiply(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 204 | return EnhancedBlendMode._apply_blend(target, blend, lambda t, b: t * b, opacity) 205 | 206 | @staticmethod 207 | def screen(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 208 | return EnhancedBlendMode._apply_blend(target, blend, lambda t, b: 1 - (1 - t) * (1 - b), opacity) 209 | 210 | @staticmethod 211 | def overlay(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 212 | def overlay_op(t, b): 213 | return np.where(t > 0.5, 1 - 2 * (1 - t) * (1 - b), 2 * t * b) 214 | return EnhancedBlendMode._apply_blend(target, blend, overlay_op, opacity) 215 | 216 | @staticmethod 217 | def soft_light(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 218 | def soft_light_op(t, b): 219 | return np.where(b > 0.5, 220 | t + (2 * b - 1) * (t - t * t), 221 | t - (1 - 2 * b) * t * (1 - t)) 
222 | return EnhancedBlendMode._apply_blend(target, blend, soft_light_op, opacity) 223 | 224 | @staticmethod 225 | def hard_light(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 226 | def hard_light_op(t, b): 227 | return np.where(b > 0.5, 228 | 1 - 2 * (1 - t) * (1 - b), 229 | 2 * t * b) 230 | return EnhancedBlendMode._apply_blend(target, blend, hard_light_op, opacity) 231 | 232 | @staticmethod 233 | def difference(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 234 | return EnhancedBlendMode._apply_blend(target, blend, lambda t, b: np.abs(t - b), opacity) 235 | 236 | @staticmethod 237 | def exclusion(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 238 | return EnhancedBlendMode._apply_blend(target, blend, lambda t, b: t + b - 2 * t * b, opacity) 239 | 240 | @staticmethod 241 | def color_dodge(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 242 | def color_dodge_op(t, b): 243 | return np.where(b >= 1, 1, np.minimum(1, t / (1 - b + 1e-6))) 244 | return EnhancedBlendMode._apply_blend(target, blend, color_dodge_op, opacity) 245 | 246 | @staticmethod 247 | def color_burn(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 248 | def color_burn_op(t, b): 249 | return np.where(b <= 0, 0, np.maximum(0, 1 - (1 - t) / (b + 1e-6))) 250 | return EnhancedBlendMode._apply_blend(target, blend, color_burn_op, opacity) 251 | 252 | @staticmethod 253 | def linear_light(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 254 | def linear_light_op(t, b): 255 | return np.clip(2 * b + t - 1, 0, 1) 256 | return EnhancedBlendMode._apply_blend(target, blend, linear_light_op, opacity) 257 | 258 | @staticmethod 259 | def pin_light(target: np.ndarray, blend: np.ndarray, opacity: float = 1.0) -> np.ndarray: 260 | def pin_light_op(t, b): 261 | return np.where(b > 0.5, 262 | np.maximum(t, 2 * (b - 0.5)), 263 | np.minimum(t, 2 * b)) 264 | return EnhancedBlendMode._apply_blend(target, blend, pin_light_op, opacity) 265 | 266 | class EnhancedMaskProcessor: 267 | """Enhanced mask processing with advanced refinement techniques""" 268 | 269 | @staticmethod 270 | def refine_mask(mask: Image.Image, config: ProcessingConfig, original_image: Image.Image) -> Image.Image: 271 | """Enhanced mask refinement with improved edge detection and color control""" 272 | try: 273 | # Convert mask to numpy array 274 | mask_np = np.array(mask) 275 | if len(mask_np.shape) > 2: 276 | if mask_np.shape[2] == 4: 277 | mask_np = mask_np[:, :, 3] 278 | else: 279 | mask_np = mask_np[:, :, 0] 280 | 281 | # Initial binary threshold 282 | _, binary_mask = cv2.threshold( 283 | mask_np, 284 | 127, 285 | 255, 286 | cv2.THRESH_BINARY 287 | ) 288 | 289 | # Enhanced Edge Detection 290 | if config.edge_detection: 291 | # Detect edges using Canny 292 | edges = cv2.Canny(binary_mask, 100, 200) 293 | 294 | # Create kernel based on edge_thickness 295 | kernel = cv2.getStructuringElement( 296 | cv2.MORPH_ELLIPSE, 297 | (config.edge_thickness, config.edge_thickness) 298 | ) 299 | 300 | # Dilate edges 301 | edges = cv2.dilate(edges, kernel) 302 | 303 | # Convert edge color from grayscale to color mask 304 | edge_mask = np.zeros((*binary_mask.shape, 4), dtype=np.uint8) 305 | edge_mask[edges > 0] = config.edge_color # Edge color from config 306 | 307 | # Blend edges with original mask 308 | binary_mask = cv2.addWeighted( 309 | binary_mask, 310 | 0.7, 311 | edges, 312 | 0.3, 313 | 0 314 | ) 315 | 316 | # Handle mask 
expansion 317 | if config.mask_expansion != 0: 318 | kernel = np.ones((2, 2), np.uint8) 319 | if config.mask_expansion > 0: 320 | binary_mask = cv2.dilate(binary_mask, kernel, iterations=1) 321 | else: 322 | binary_mask = cv2.erode(binary_mask, kernel, iterations=1) 323 | 324 | # Apply minimal blur for anti-aliasing 325 | if config.mask_blur > 0: 326 | blur_amount = min(config.mask_blur, 3) 327 | binary_mask = cv2.GaussianBlur( 328 | binary_mask, 329 | (blur_amount*2+1, blur_amount*2+1), 330 | 0 331 | ) 332 | 333 | if config.invert_generated_mask: 334 | binary_mask = 255 - binary_mask 335 | 336 | return Image.fromarray(binary_mask.astype(np.uint8), 'L') 337 | 338 | except Exception as e: 339 | logger.error(f"Mask refinement failed: {str(e)}") 340 | return mask 341 | 342 | class EnhancedAnimator: 343 | """Enhanced animation processing with additional effects""" 344 | 345 | @staticmethod 346 | def animate_element( 347 | element: Image.Image, 348 | animation_type: str, 349 | animation_speed: float, 350 | frame_number: int, 351 | total_frames: int, 352 | x_start: int, 353 | y_start: int, 354 | canvas_width: int, 355 | canvas_height: int, 356 | scale: float, 357 | rotation: float, 358 | easing_func: Callable[[float], float], 359 | repeat: int, 360 | reverse: bool, 361 | delay: float, 362 | steps: int = 1, 363 | phase_shift: float = 0.0 364 | ) -> Tuple[Image.Image, int, int]: 365 | # Adjust frame_number based on delay 366 | adjusted_frame = frame_number - int(delay * total_frames) 367 | if adjusted_frame < 0: 368 | return element, x_start, y_start # No animation yet 369 | 370 | # Handle repeats 371 | cycle_length = total_frames / repeat 372 | current_cycle = int(adjusted_frame / cycle_length) 373 | frame_in_cycle = adjusted_frame % cycle_length 374 | progress = frame_in_cycle / cycle_length # Normalized progress within the cycle 375 | 376 | # Apply easing function 377 | progress = easing_func(progress) 378 | 379 | # Handle reverse 380 | if reverse and current_cycle % 2 == 1: 381 | progress = 1 - progress 382 | 383 | orig_width, orig_height = element.size 384 | 385 | if element.mode != 'RGBA': 386 | element = element.convert('RGBA') 387 | 388 | # Calculate the bounding box of visible pixels to determine rotation center 389 | bbox = element.getbbox() 390 | if bbox: 391 | cropped = element.crop(bbox) 392 | center_x = cropped.width // 2 393 | center_y = cropped.height // 2 394 | else: 395 | cropped = element 396 | center_x, center_y = element.width // 2, element.height // 2 397 | 398 | # Apply scaling 399 | new_size = (int(orig_width * scale), int(orig_height * scale)) 400 | element = element.resize(new_size, Image.LANCZOS) 401 | 402 | # Apply rotation around the center of visible pixels 403 | if rotation != 0: 404 | # Calculate bounding box for the rotated image 405 | rotated_image = element.rotate( 406 | rotation, 407 | resample=Image.BICUBIC, 408 | expand=True 409 | ) 410 | rotated_width, rotated_height = rotated_image.size 411 | new_canvas = Image.new("RGBA", (rotated_width, rotated_height), (0, 0, 0, 0)) 412 | offset_x = (rotated_width - element.width) // 2 413 | offset_y = (rotated_height - element.height) // 2 414 | new_canvas.paste(element, (offset_x, offset_y)) 415 | element = new_canvas.rotate( 416 | rotation, 417 | resample=Image.BICUBIC, 418 | expand=False 419 | ) 420 | 421 | x, y = x_start, y_start 422 | 423 | # Apply steps and phase_shift for staggered animations 424 | if steps > 1: 425 | step_progress = progress * steps 426 | current_step = int(step_progress) 427 | progress = 
step_progress - current_step 428 | x += int(phase_shift * current_step) 429 | y += int(phase_shift * current_step) 430 | progress = min(progress, 1.0) 431 | 432 | # Existing animation types with adjusted progress 433 | if animation_type == AnimationType.BOUNCE.value: 434 | y_offset = int(math.sin(progress * 2 * math.pi) * animation_speed * 50) 435 | y += y_offset 436 | 437 | elif animation_type == AnimationType.SCALE_BOUNCE.value: 438 | scale_factor = 1 + math.sin(progress * 2 * math.pi) * animation_speed * 0.2 439 | scaled_size = (int(element.width * scale_factor), int(element.height * scale_factor)) 440 | element = element.resize(scaled_size, Image.LANCZOS) 441 | x -= (scaled_size[0] - new_size[0]) // 2 442 | y -= (scaled_size[1] - new_size[1]) // 2 443 | 444 | elif animation_type == AnimationType.SPIRAL.value: 445 | radius = 50 * animation_speed 446 | angle = progress * 4 * math.pi 447 | x += int(radius * math.cos(angle)) 448 | y += int(radius * math.sin(angle)) 449 | element = element.rotate( 450 | angle * 180 / math.pi, 451 | resample=Image.BICUBIC, 452 | expand=True, 453 | center=(center_x, center_y) 454 | ) 455 | 456 | elif animation_type == AnimationType.TRAVEL_LEFT.value: 457 | x = int(canvas_width - (canvas_width + element.width) * progress) 458 | 459 | elif animation_type == AnimationType.TRAVEL_RIGHT.value: 460 | x = int(-element.width + (canvas_width + element.width) * progress) 461 | 462 | elif animation_type == AnimationType.ROTATE.value: 463 | spin_speed = 360 * animation_speed # degrees per cycle 464 | angle = progress * spin_speed 465 | element = element.rotate( 466 | angle, 467 | resample=Image.BICUBIC, 468 | expand=True, 469 | center=(center_x, center_y) 470 | ) 471 | 472 | elif animation_type == AnimationType.FADE_IN.value: 473 | opacity = int(progress * 255) 474 | r, g, b, a = element.split() 475 | a = a.point(lambda i: i * opacity // 255) 476 | element = Image.merge('RGBA', (r, g, b, a)) 477 | 478 | elif animation_type == AnimationType.FADE_OUT.value: 479 | opacity = int((1 - progress) * 255) 480 | r, g, b, a = element.split() 481 | a = a.point(lambda i: i * opacity // 255) 482 | element = Image.merge('RGBA', (r, g, b, a)) 483 | 484 | elif animation_type == AnimationType.ZOOM_IN.value: 485 | zoom_scale = 1 + progress * animation_speed 486 | new_width = int(orig_width * scale * zoom_scale) 487 | new_height = int(orig_height * scale * zoom_scale) 488 | element = element.resize((new_width, new_height), Image.LANCZOS) 489 | 490 | # Center the zoomed image 491 | left = (new_width - orig_width * scale) / 2 492 | top = (new_height - orig_height * scale) / 2 493 | right = left + orig_width * scale 494 | bottom = top + orig_height * scale 495 | 496 | element = element.crop((int(left), int(top), int(right), int(bottom))) 497 | 498 | elif animation_type == AnimationType.ZOOM_OUT.value: 499 | zoom_scale = 1 + (1 - progress) * animation_speed 500 | new_width = int(orig_width * scale * zoom_scale) 501 | new_height = int(orig_height * scale * zoom_scale) 502 | element = element.resize((new_width, new_height), Image.LANCZOS) 503 | 504 | # Center the zoomed image 505 | left = (new_width - orig_width * scale) / 2 506 | top = (new_height - orig_height * scale) / 2 507 | right = left + orig_width * scale 508 | bottom = top + orig_height * scale 509 | 510 | element = element.crop((int(left), int(top), int(right), int(bottom))) 511 | 512 | # New animation types with adjusted progress 513 | elif animation_type == AnimationType.SHAKE.value: 514 | shake_amplitude = 10 * animation_speed 515 | 
x_offset = int(math.sin(progress * 10 * math.pi) * shake_amplitude) 516 | y_offset = int(math.cos(progress * 10 * math.pi) * shake_amplitude) 517 | x += x_offset 518 | y += y_offset 519 | 520 | elif animation_type == AnimationType.SLIDE_UP.value: 521 | y = int(y_start - (y_start + element.height) * progress) 522 | 523 | elif animation_type == AnimationType.SLIDE_DOWN.value: 524 | y = int(-element.height + (canvas_height + element.height) * progress) 525 | 526 | elif animation_type == AnimationType.FLIP_HORIZONTAL.value: 527 | if progress > 0.5: 528 | element = element.transpose(Image.FLIP_LEFT_RIGHT) 529 | 530 | elif animation_type == AnimationType.FLIP_VERTICAL.value: 531 | if progress > 0.5: 532 | element = element.transpose(Image.FLIP_TOP_BOTTOM) 533 | 534 | elif animation_type == AnimationType.WAVE.value: 535 | wave_amplitude = 20 * animation_speed 536 | wave_frequency = 2 537 | y_offset = int(math.sin(progress * wave_frequency * 2 * math.pi) * wave_amplitude) 538 | y += y_offset 539 | 540 | elif animation_type == AnimationType.PULSE.value: 541 | pulse_scale = 1 + 0.3 * math.sin(progress * 4 * math.pi) * animation_speed 542 | new_size = (int(orig_width * scale * pulse_scale), int(orig_height * scale * pulse_scale)) 543 | element = element.resize(new_size, Image.LANCZOS) 544 | x -= (new_size[0] - orig_width * scale) // 2 545 | y -= (new_size[1] - orig_height * scale) // 2 546 | 547 | elif animation_type == AnimationType.SWING.value: 548 | swing_amplitude = 15 * animation_speed 549 | swing_angle = math.sin(progress * 4 * math.pi) * swing_amplitude 550 | element = element.rotate( 551 | swing_angle, 552 | resample=Image.BICUBIC, 553 | expand=True, 554 | center=(center_x, center_y) 555 | ) 556 | 557 | elif animation_type == AnimationType.SPIN.value: 558 | spin_speed = 360 * animation_speed # degrees per cycle 559 | angle = progress * spin_speed 560 | element = element.rotate( 561 | angle, 562 | resample=Image.BICUBIC, 563 | expand=True, 564 | center=(center_x, center_y) 565 | ) 566 | 567 | elif animation_type == AnimationType.FLASH.value: 568 | brightness = 1 + 0.5 * math.sin(progress * 2 * math.pi) * animation_speed 569 | enhancer = ImageEnhance.Brightness(element) 570 | element = enhancer.enhance(brightness) 571 | 572 | # Add more animation types as needed 573 | 574 | return element, x, y 575 | 576 | class EnhancedGeekyRemB: 577 | def __init__(self): 578 | self.session = None 579 | self.session_lock = Lock() 580 | self.use_gpu = torch.cuda.is_available() 581 | self.config = ProcessingConfig() 582 | self.blend_modes = EnhancedBlendMode.get_blend_modes() 583 | self.mask_processor = EnhancedMaskProcessor() 584 | self.animator = EnhancedAnimator() 585 | 586 | # Enhanced thread pool configuration with proper error handling 587 | try: 588 | cpu_cores = cpu_count() 589 | except: 590 | cpu_cores = os.cpu_count() or 4 # Fallback if cpu_count fails 591 | 592 | self.max_workers = min(cpu_cores, 8) # Limit to reasonable number 593 | self.executor = ThreadPoolExecutor(max_workers=self.max_workers) 594 | 595 | # Enhanced cache configuration 596 | self.frame_cache = LRUCache(maxsize=100) 597 | 598 | def cleanup_resources(self): 599 | """Enhanced cleanup resources with better error handling""" 600 | try: 601 | if self.session is not None: 602 | self.session = None 603 | 604 | if self.executor is not None: 605 | try: 606 | self.executor.shutdown(wait=False) 607 | except Exception as e: 608 | logger.error(f"Error shutting down executor: {str(e)}") 609 | self.executor = None 610 | 611 | if self.frame_cache 
is not None: 612 | try: 613 | self.frame_cache.clear() 614 | except Exception as e: 615 | logger.error(f"Error clearing cache: {str(e)}") 616 | self.frame_cache = None 617 | 618 | # Force garbage collection 619 | gc.collect() 620 | if self.use_gpu: 621 | torch.cuda.empty_cache() 622 | 623 | except Exception as e: 624 | logger.error(f"Error during cleanup: {str(e)}") 625 | 626 | def __del__(self): 627 | """Enhanced destructor with better error handling""" 628 | try: 629 | self.cleanup_resources() 630 | except: 631 | pass 632 | 633 | @classmethod 634 | def INPUT_TYPES(cls): 635 | return { 636 | "required": { 637 | "output_format": (["RGBA", "RGB"],), 638 | "foreground": ("IMAGE",), 639 | "enable_background_removal": ("BOOLEAN", {"default": True}), 640 | "removal_method": (["rembg", "chroma_key"],), 641 | "model": ([ 642 | "u2net", "u2netp", "u2net_human_seg", "u2net_cloth_seg", 643 | "silueta", "isnet-general-use", "isnet-anime" 644 | ],), 645 | "chroma_key_color": (["green", "blue", "red"],), 646 | "chroma_key_tolerance": ("FLOAT", {"default": 0.1, "min": 0.0, "max": 1.0, "step": 0.01}), 647 | "mask_expansion": ("INT", {"default": 0, "min": -100, "max": 100, "step": 1}), 648 | "edge_detection": ("BOOLEAN", {"default": False}), 649 | "edge_thickness": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), 650 | "mask_blur": ("INT", {"default": 5, "min": 0, "max": 100, "step": 1}), 651 | "threshold": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.01}), 652 | "invert_generated_mask": ("BOOLEAN", {"default": False}), 653 | "remove_small_regions": ("BOOLEAN", {"default": False}), 654 | "small_region_size": ("INT", {"default": 100, "min": 1, "max": 1000, "step": 1}), 655 | "alpha_matting": ("BOOLEAN", {"default": False}), 656 | "alpha_matting_foreground_threshold": ("INT", {"default": 240, "min": 0, "max": 255, "step": 1}), 657 | "alpha_matting_background_threshold": ("INT", {"default": 10, "min": 0, "max": 255, "step": 1}), 658 | "animation_type": ([anim.value for anim in AnimationType],), 659 | "animation_speed": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 10.0, "step": 0.1}), 660 | "animation_duration": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 10.0, "step": 0.1}), 661 | "repeats": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}), 662 | "reverse": ("BOOLEAN", {"default": False}), 663 | "easing_function": (list(EASING_FUNCTIONS.keys()),), 664 | "delay": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 5.0, "step": 0.1}), 665 | "animation_frames": ("INT", {"default": 1, "min": 1, "max": 3000, "step": 1}), 666 | "x_position": ("INT", {"default": 0, "min": -1000, "max": 1000, "step": 1}), 667 | "y_position": ("INT", {"default": 0, "min": -1000, "max": 1000, "step": 1}), 668 | "scale": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 5.0, "step": 0.1}), 669 | "rotation": ("FLOAT", {"default": 0, "min": -360, "max": 360, "step": 1}), 670 | "blend_mode": (list(EnhancedBlendMode.get_blend_modes().keys()),), 671 | "opacity": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}), 672 | "aspect_ratio": ("STRING", { 673 | "default": "", 674 | "placeholder": "e.g., 16:9, 4:3, 1:1, portrait, landscape" 675 | }), 676 | "steps": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}), 677 | "phase_shift": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.1}), 678 | }, 679 | "optional": { 680 | "background": ("IMAGE",), 681 | "additional_mask": ("MASK",), 682 | "invert_additional_mask": ("BOOLEAN", {"default": False}), 683 | } 684 | } 685 | 686 | RETURN_TYPES = 
("IMAGE", "MASK") 687 | FUNCTION = "process_image" 688 | CATEGORY = "image/processing" 689 | 690 | def initialize_model(self, model: str) -> None: 691 | """Thread-safe model initialization""" 692 | with self.session_lock: 693 | if self.session is None or getattr(self.session, 'model_name', None) != model: 694 | providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] if self.use_gpu else ['CPUExecutionProvider'] 695 | try: 696 | self.session = new_session(model, providers=providers) 697 | logger.info(f"Model '{model}' initialized successfully.") 698 | except Exception as e: 699 | logger.error(f"Failed to initialize model '{model}': {str(e)}") 700 | raise RuntimeError(f"Model initialization failed: {str(e)}") 701 | 702 | def remove_background_rembg(self, image: Image.Image) -> Tuple[Image.Image, Image.Image]: 703 | """Enhanced background removal using rembg with alpha matting""" 704 | try: 705 | if image.mode != 'RGBA': 706 | image = image.convert('RGBA') 707 | 708 | result = remove( 709 | image, 710 | session=self.session, 711 | alpha_matting=self.config.alpha_matting, 712 | alpha_matting_foreground_threshold=self.config.alpha_matting_foreground_threshold, 713 | alpha_matting_background_threshold=self.config.alpha_matting_background_threshold 714 | ) 715 | 716 | return result, result.split()[3] 717 | except Exception as e: 718 | logger.error(f"Background removal failed: {str(e)}") 719 | raise RuntimeError(f"Background removal failed: {str(e)}") 720 | 721 | def remove_background_chroma(self, image: Image.Image) -> Image.Image: 722 | """Simplified chroma key with better alpha handling""" 723 | try: 724 | # Convert to numpy array 725 | img_np = np.array(image) 726 | if img_np.shape[2] == 4: 727 | img_np = img_np[:, :, :3] 728 | 729 | # Convert to HSV color space 730 | hsv = cv2.cvtColor(img_np, cv2.COLOR_RGB2HSV) 731 | 732 | # Basic color ranges 733 | tolerance = int(30 * self.config.chroma_key_tolerance) 734 | saturation_min = 30 735 | value_min = 30 736 | 737 | # Color-specific ranges 738 | if self.config.chroma_key_color == "green": 739 | lower = np.array([55 - tolerance, saturation_min, value_min]) 740 | upper = np.array([65 + tolerance, 255, 255]) 741 | elif self.config.chroma_key_color == "blue": 742 | lower = np.array([110 - tolerance, saturation_min, value_min]) 743 | upper = np.array([130 + tolerance, 255, 255]) 744 | else: # red 745 | lower1 = np.array([0, saturation_min, value_min]) 746 | upper1 = np.array([tolerance, 255, 255]) 747 | lower2 = np.array([180 - tolerance, saturation_min, value_min]) 748 | upper2 = np.array([180, 255, 255]) 749 | 750 | mask1 = cv2.inRange(hsv, lower1, upper1) 751 | mask2 = cv2.inRange(hsv, lower2, upper2) 752 | mask = cv2.bitwise_or(mask1, mask2) 753 | 754 | # Create mask for non-red colors 755 | if self.config.chroma_key_color != "red": 756 | mask = cv2.inRange(hsv, lower, upper) 757 | 758 | # Invert mask (0 for background, 255 for foreground) 759 | mask = 255 - mask 760 | 761 | # Convert back to PIL and return 762 | return Image.fromarray(mask, 'L') 763 | 764 | except Exception as e: 765 | logger.error(f"Chroma key removal failed: {str(e)}") 766 | raise RuntimeError(f"Chroma key removal failed: {str(e)}") 767 | 768 | def process_frame(self, frame: Image.Image, background_frame: Optional[Image.Image], 769 | frame_number: int, total_frames: int) -> Tuple[Image.Image, Image.Image]: 770 | """Fixed frame processing""" 771 | try: 772 | if isinstance(frame, np.ndarray): 773 | frame = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)) 774 | 775 | 
    def process_frame(self, frame: Image.Image, background_frame: Optional[Image.Image],
                      frame_number: int, total_frames: int) -> Tuple[Image.Image, Image.Image]:
        """Process a single frame: background removal, animation, and composition."""
        try:
            if isinstance(frame, np.ndarray):
                frame = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

            # Ensure RGBA mode
            frame = frame.convert('RGBA')

            if self.config.enable_background_removal:
                if self.config.removal_method == "rembg":
                    frame, mask = self.remove_background_rembg(frame)
                else:
                    # Build the chroma-key mask, then refine it
                    mask = self.remove_background_chroma(frame)
                    mask = self.mask_processor.refine_mask(mask, self.config, frame)

                    # Apply the refined mask as the frame's alpha channel. The RGB
                    # channels are left untouched: paste() applies the alpha during
                    # composition, so premultiplying here would darken the edges.
                    frame_array = np.array(frame)
                    mask_array = np.array(mask)

                    # Ensure the mask has a channel axis before assigning to alpha
                    if len(mask_array.shape) == 2:
                        mask_array = mask_array[:, :, None]

                    frame_array[:, :, 3] = mask_array[:, :, 0]
                    frame = Image.fromarray(frame_array.astype(np.uint8), 'RGBA')
            else:
                mask = Image.new('L', frame.size, 255)

            # Apply the animation transform for this frame
            animated_frame, x, y = self.animator.animate_element(
                frame,
                self.config.animation_type,
                self.config.animation_speed,
                frame_number,
                total_frames,
                self.config.x_position,
                self.config.y_position,
                background_frame.width if background_frame else frame.width,
                background_frame.height if background_frame else frame.height,
                self.config.scale,
                self.config.rotation,
                EASING_FUNCTIONS.get(self.config.easing_function, linear),
                self.config.repeats,
                self.config.reverse,
                self.config.delay,
                steps=self.config.steps,
                phase_shift=self.config.phase_shift
            )

            # Composite over the background frame, if one was supplied
            if background_frame is not None:
                bg = background_frame.convert('RGBA')
                result = Image.new('RGBA', bg.size, (0, 0, 0, 0))
                result.paste(bg, (0, 0))
                result.paste(animated_frame, (int(x), int(y)), animated_frame)
                animated_frame = result

            return animated_frame, mask

        except Exception as e:
            logger.error(f"Frame processing failed: {str(e)}")
            return frame, Image.new('L', frame.size, 255)
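    # NOTE: rough shape of a single-frame call, assuming a configured node
    # instance `node` and PIL images `fg` / `bg`. These names are hypothetical;
    # in normal use, process_image() below drives this method once per frame.
    #
    #   out_frame, out_mask = node.process_frame(fg, bg, frame_number=0, total_frames=1)
    #   out_frame.save("composited.png")
    #   out_mask.save("mask.png")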
    def process_image(self, output_format, foreground, enable_background_removal, removal_method, model,
                      chroma_key_color, chroma_key_tolerance, mask_expansion, edge_detection,
                      edge_thickness, mask_blur, threshold, invert_generated_mask,
                      remove_small_regions, small_region_size, alpha_matting,
                      alpha_matting_foreground_threshold, alpha_matting_background_threshold,
                      animation_type, animation_speed, animation_duration, repeats, reverse,
                      easing_function, delay, animation_frames, x_position, y_position,
                      scale, rotation, blend_mode, opacity, aspect_ratio, steps=1,
                      phase_shift: float = 0.0, background=None,
                      additional_mask=None, invert_additional_mask=False):
        try:
            # Store animation and composition parameters
            self.config.animation_type = animation_type
            self.config.animation_speed = animation_speed
            self.config.x_position = x_position
            self.config.y_position = y_position
            self.config.scale = scale
            self.config.rotation = rotation
            self.config.blend_mode = blend_mode
            self.config.opacity = opacity

            # Store background-removal and mask-refinement parameters
            self.config.enable_background_removal = enable_background_removal
            self.config.removal_method = removal_method
            self.config.chroma_key_color = chroma_key_color
            self.config.chroma_key_tolerance = chroma_key_tolerance
            self.config.mask_expansion = mask_expansion
            self.config.edge_detection = edge_detection
            self.config.edge_thickness = edge_thickness
            self.config.mask_blur = mask_blur
            self.config.threshold = threshold
            self.config.invert_generated_mask = invert_generated_mask
            self.config.remove_small_regions = remove_small_regions
            self.config.small_region_size = small_region_size
            self.config.alpha_matting = alpha_matting
            self.config.alpha_matting_foreground_threshold = alpha_matting_foreground_threshold
            self.config.alpha_matting_background_threshold = alpha_matting_background_threshold

            # Timing and easing parameters
            self.config.easing_function = easing_function
            self.config.repeats = repeats
            self.config.reverse = reverse
            self.config.delay = delay
            self.config.animation_duration = animation_duration
            self.config.steps = steps
            self.config.phase_shift = phase_shift

            debug_tensor_info(foreground, "Input foreground")

            if enable_background_removal and removal_method == "rembg":
                self.initialize_model(model)

            # Convert input tensors to PIL images
            fg_frames = [tensor2pil(foreground[i]) for i in range(foreground.shape[0])]
            bg_frames = [tensor2pil(background[i]) for i in range(background.shape[0])] if background is not None else None

            # Loop the background frames if fewer were supplied than needed
            if bg_frames and len(bg_frames) < animation_frames:
                bg_frames = bg_frames * (animation_frames // len(bg_frames) + 1)
                bg_frames = bg_frames[:animation_frames]

            # Parse and apply the aspect ratio if one was specified
            aspect_ratio_value = self.parse_aspect_ratio(aspect_ratio)
            if aspect_ratio_value is not None:
                for i in range(len(fg_frames)):
                    new_width = int(fg_frames[i].width * scale)
                    new_height = int(new_width / aspect_ratio_value)
                    fg_frames[i] = fg_frames[i].resize((new_width, new_height), Image.LANCZOS)

            animated_frames = []
            masks = []

            with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
                futures = []
                for frame in range(animation_frames):
                    fg_index = frame % len(fg_frames)
                    bg_frame = bg_frames[frame % len(bg_frames)] if bg_frames else None

                    future = executor.submit(
                        self.process_frame,
                        fg_frames[fg_index],
                        bg_frame,
                        frame,
                        animation_frames
                    )
                    futures.append(future)

                # Collect results in submission order
                for future in tqdm(futures, desc="Processing frames"):
                    try:
                        result_frame, mask = future.result()

                        # Combine with the additional mask, if one was provided
                        if additional_mask is not None:
                            additional_mask_pil = tensor2pil(
                                additional_mask[len(animated_frames) % len(additional_mask)]
                            ).convert('L')  # match the generated mask's mode and shape
                            if invert_additional_mask:
                                additional_mask_pil = ImageOps.invert(additional_mask_pil)
                            mask = Image.fromarray(
                                np.minimum(np.array(mask), np.array(additional_mask_pil))
                            )

                        # Cache results
                        frame_key = f"frame_{len(animated_frames)}"
                        self.frame_cache[frame_key] = (result_frame, mask)

                        animated_frames.append(pil2tensor(result_frame))
                        masks.append(pil2tensor(mask.convert('L')))

                    except Exception as e:
                        logger.error(f"Error processing frame {len(animated_frames)}: {str(e)}")
                        # Repeat the previous frame if one exists; otherwise emit a blank
                        if animated_frames:
                            animated_frames.append(animated_frames[-1])
                            masks.append(masks[-1])
                        else:
                            blank_frame = Image.new('RGBA', fg_frames[0].size, (0, 0, 0, 0))
                            blank_mask = Image.new('L', fg_frames[0].size, 0)
                            animated_frames.append(pil2tensor(blank_frame))
                            masks.append(pil2tensor(blank_mask))
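            # NOTE: the additional-mask merge above is a per-pixel minimum, i.e.
            # an intersection of the two masks. A tiny standalone illustration
            # with hypothetical values:
            #
            #   a = np.array([[255, 128], [0, 255]], dtype=np.uint8)
            #   b = np.array([[255, 255], [255, 0]], dtype=np.uint8)
            #   np.minimum(a, b)  # -> [[255, 128], [0, 0]]: opaque only where both agree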
            # Convert the output to RGB if requested
            if output_format == "RGB":
                for i in range(len(animated_frames)):
                    frame = tensor2pil(animated_frames[i])
                    frame = frame.convert('RGB')
                    animated_frames[i] = pil2tensor(frame)

            # Concatenate per-frame tensors into batched outputs
            try:
                result = torch.cat(animated_frames, dim=0)
                result_masks = torch.cat(masks, dim=0)
                debug_tensor_info(result, "Output result")
                debug_tensor_info(result_masks, "Output masks")
                return (result, result_masks)
            except Exception as e:
                logger.error(f"Error concatenating results: {str(e)}")
                # Fall back to the input image and an empty [B, H, W] mask
                return (foreground, torch.zeros_like(foreground[:, :, :, 0]))

        except Exception as e:
            logger.error(f"Error in process_image: {str(e)}")
            return (foreground, torch.zeros_like(foreground[:, :, :, 0]))

    def parse_aspect_ratio(self, aspect_ratio_input: str) -> Optional[float]:
        """Parse 'W:H' strings, bare numbers, and named presets into a float ratio."""
        if not aspect_ratio_input:
            return None

        try:
            if ':' in aspect_ratio_input:
                w, h = map(float, aspect_ratio_input.split(':'))
                if h == 0:
                    logger.warning("Invalid aspect ratio: height cannot be zero")
                    return None
                return w / h

            try:
                return float(aspect_ratio_input)
            except ValueError:
                pass

            # Named presets; 'W:H' strings are already handled above
            standard_ratios = {
                'square': 1.0,
                'portrait': 3 / 4,
                'landscape': 4 / 3
            }

            return standard_ratios.get(aspect_ratio_input.lower())

        except Exception as e:
            logger.error(f"Error parsing aspect ratio: {str(e)}")
            return None

    def cleanup(self):
        """Cleanup resources"""
        self.cleanup_resources()

    def __del__(self):
        """Destructor to ensure proper cleanup"""
        self.cleanup()


# Helper class for frame caching
class LRUCache:
    """Least Recently Used cache with thread-safe access."""

    def __init__(self, maxsize: int = 100):
        self.cache = {}
        self.maxsize = maxsize
        self.access_order = []
        self.lock = Lock()

    def __getitem__(self, key):
        with self.lock:
            if key in self.cache:
                # Move to the end to mark as recently used
                self.access_order.remove(key)
                self.access_order.append(key)
                return self.cache[key]
            raise KeyError(key)

    def __setitem__(self, key, value):
        with self.lock:
            if key in self.cache:
                self.access_order.remove(key)
            elif len(self.cache) >= self.maxsize:
                # Evict the least recently used item
                lru_key = self.access_order.pop(0)
                del self.cache[lru_key]
                logger.debug(f"LRU cache evicted: {lru_key}")

            self.cache[key] = value
            self.access_order.append(key)
            logger.debug(f"LRU cache set: {key}")

    def clear(self):
        with self.lock:
            self.cache.clear()
            self.access_order.clear()
            logger.info("LRU cache cleared.")


# Node class mappings
NODE_CLASS_MAPPINGS = {
    "GeekyRemB": EnhancedGeekyRemB
}

# Display name for the node
NODE_DISPLAY_NAME_MAPPINGS = {
    "GeekyRemB": "Geeky RemB"
}
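# NOTE: a minimal usage sketch for the LRUCache helper above (hypothetical
# keys and values, illustration only):
#
#   cache = LRUCache(maxsize=2)
#   cache["a"] = 1
#   cache["b"] = 2
#   _ = cache["a"]   # marks "a" as recently used
#   cache["c"] = 3   # evicts "b", the least recently used key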
--------------------------------------------------------------------------------