├── .gitignore ├── LICENSE ├── README.md ├── __init__.py ├── example_workflows ├── example_workflow_1.json └── example_workflow_2.json ├── font_files ├── AURORA-PRO.otf ├── Akira Expanded Demo.otf ├── Another Danger - Demo.otf ├── Doctor Glitch.otf ├── Ghastly Panic.ttf ├── MetalGothic-Regular.ttf ├── Montserrat-Black.ttf ├── The Constellation.ttf ├── The-Augusta.otf ├── Vogue.ttf └── Wreckside.otf ├── helpers ├── logger.py └── utils.py ├── nodes ├── audio2video_node.py ├── canvas_settings_node.py ├── color_animations_node.py ├── font2img_node.py ├── scheduled_values_node.py ├── speech2text_node.py ├── string2file_node.py ├── text2speech_node.py ├── text_graphic_element_node.py └── video2audio_node.py ├── requirements.txt └── web └── js ├── scheduled_values.js ├── text_preview.js ├── vid_preview.js └── vid_upload.js /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | audio_temp.wav 3 | video_files 4 | audio_files 5 | text_files 6 | .env 7 | *.code-workspace 8 | .vscode/* -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Pascal Rössler 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![ezgif com-optimize(2)](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/f48b37c2-c3db-408f-ada8-a6bf336b6549) 2 | 3 | ![Static Badge](https://img.shields.io/badge/release-v1.0.0-black?style=plastic&logo=GitHub&logoColor=white&color=green) 4 | [![Custom Badge](https://img.shields.io/badge/buy-coffe-orange?style=plastic&logo=buymeacoffee&logoColor=white&link=URL)](https://buymeacoffee.com/foreigngods) 5 | 10 | Welcome to the ComfyUI-Mana-Nodes project! 11 | 12 | This collection of custom nodes is designed to supercharge text-based content creation within the ComfyUI environment. 13 | 14 | Whether you're working on dynamic captions, transcribing audio, or crafting engaging visual content, Mana Nodes has got you covered. 15 | 16 | If you like Mana Nodes, give our repo a [⭐ Star](https://github.com/ForeignGods/ComfyUI-Mana-Nodes) and [👀 Watch](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/subscription) our repository to stay updated. 
17 | 18 | ## Installation 19 | You can install Mana Nodes via the [ComfyUI-Manager](https://github.com/ltdrdata/ComfyUI-Manager) 20 | 21 | Or simply clone the repo into the `custom_nodes` directory with this command: 22 | 23 | ``` 24 | git clone https://github.com/ForeignGods/ComfyUI-Mana-Nodes.git 25 | ``` 26 | 27 | and install the requirements using: 28 | ``` 29 | .\python_embed\python.exe -s -m pip install -r requirements.txt --user 30 | ``` 31 | 32 | If you are using a venv, make sure you have it activated before installation and use: 33 | ``` 34 | pip install -r requirements.txt 35 | ``` 36 | 37 | ## Nodes 38 | 39 |
40 | ✒️ Text to Image Generator 41 | 42 | #### Required Inputs 43 | 44 | #### `font` 45 | 46 | To set the font and its styling, connect a 🆗 Font Properties node here. 47 | 48 | #### `canvas` 49 | 50 | To configure the canvas, connect a 🖼️ Canvas Properties node here. 51 | 52 | #### `text` 53 | 54 | Specifies the text to be rendered on the images. Supports multiline text input for rendering on separate lines. 55 | - For simple text: Input the text directly as a string. 56 | - For frame-specific text: Use a JSON-like format where each line specifies a frame number and the corresponding text. Example: 57 | ``` 58 | "1": "Hello", 59 | "10": "World", 60 | "20": "End" 61 | ``` 62 | 63 | #### `frame_count` 64 | 65 | Sets the number of frames this node will output. 66 | 67 | #### Optional Inputs 68 | 69 | #### `transcription` 70 | 71 | Input the transcription output from the 🎤 Speech Recognition node here. 72 | Based on this transcription data and the 🖼️ Canvas Properties and 🆗 Font Properties settings, the text is formatted so that lines of words build up until there is no space left on the canvas (transcription_mode: fill, line). 73 | 74 | #### `highlight_font` 75 | 76 | Input a secondary 🆗 Font Properties node that is used to highlight the active caption (transcription_mode: fill, line). When setting the text manually, the following syntax can be used to define which word/character is highlighted: 77 | ``` 78 | Hello World 79 | ``` 80 | 81 | #### Outputs 82 | 83 | #### `images` 84 | 85 | The generated images with the specified text and configurations, in the common ComfyUI format (compatible with other nodes). 86 | 87 | #### `transcription_framestamps` 88 | 89 | Framestamps formatted based on canvas, font and transcription settings. 90 | Can be useful for manually correcting errors made by the 🎤 Speech Recognition node. 91 | Example: Save this output with 📝 Save/Preview Text -> manually correct mistakes -> remove transcription input from ✒️ Text to Image Generator node -> paste corrected framestamps into text input field of ✒️ Text to Image Generator node. 92 | 93 |
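The frame-keyed `text` format behaves like JSON object entries without the surrounding braces: the caption set at a frame stays visible until the next keyed frame. Below is a minimal sketch of that behaviour — not the node's actual implementation; `parse_frame_text` is a hypothetical helper:

```
import json

def parse_frame_text(text, frame_count):
    """Expand frame-keyed caption text into one caption per frame (a sketch)."""
    try:
        # Wrap the '"frame": "caption"' lines in braces so they parse as JSON.
        keyed = {int(k): v for k, v in json.loads("{" + text + "}").items()}
    except (json.JSONDecodeError, ValueError):
        # Plain (possibly multiline) text: show the same caption on every frame.
        return [text] * frame_count

    captions, current = [], ""
    for frame in range(1, frame_count + 1):
        current = keyed.get(frame, current)  # hold the last caption until a new key appears
        captions.append(current)
    return captions

print(parse_frame_text('"1": "Hello", "10": "World", "20": "End"', 25)[:12])
```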
95 | 96 |
96 | 97 | 🆗 Font Properties 98 | 99 | #### Required Inputs 100 | 101 | #### `font_file` 102 | 103 | Font file located in custom_nodes\ComfyUI-Mana-Nodes\font_files (e.g. example_font.ttf) or in the system font directories (supports .ttf, .otf, .woff, .woff2). 104 | 105 | #### `font_size` 106 | 107 | Either set a single font_size value or input an animation definition via the ⏰ Scheduled Values node. (Convert font_size to input) 108 | 109 | #### `font_color` 110 | 111 | Either set a single color value (CSS3/Color/Extended color keywords) or input an animation definition via the 🌈 Preset Color Animations node. (Convert font_color to input) 112 | 113 | #### `x_offset`, `y_offset` 114 | 115 | Either set single horizontal and vertical offset values or input an animation definition via the ⏰ Scheduled Values node. (Convert x_offset/y_offset to input) 116 | 117 | #### `rotation` 118 | 119 | Either set a single rotation value or input an animation definition via the ⏰ Scheduled Values node. (Convert rotation to input) 120 | 121 | #### `rotation_anchor_x`, `rotation_anchor_y` 122 | 123 | Horizontal and vertical offsets of the rotation anchor point, relative to the text's initial position. 124 | 125 | #### `kerning` 126 | 127 | Spacing between the characters of the font. 128 | 129 | #### `border_width` 130 | 131 | Width of the text border. 132 | 133 | #### `border_color` 134 | 135 | Either set a single color value (CSS3/Color/Extended color keywords) or input an animation definition via the 🌈 Preset Color Animations node. (Convert border_color to input) 136 | 137 | #### `shadow_color` 138 | 139 | Either set a single color value (CSS3/Color/Extended color keywords) or input an animation definition via the 🌈 Preset Color Animations node. (Convert shadow_color to input) 140 | 141 | #### `shadow_offset_x`, `shadow_offset_y` 142 | 143 | Horizontal and vertical offset of the text shadow. 144 | 145 | #### Outputs 146 | 147 | #### `font` 148 | 149 | Used as input on the ✒️ Text to Image Generator node for the font and highlight_font inputs. 150 |
152 | 153 |
154 | 🖼️ Canvas Properties 155 | 156 | #### Required Inputs 157 | 158 | #### `height`, `width` 159 | 160 | Dimensions of the canvas. 161 | 162 | #### `background_color` 163 | 164 | Background color of the canvas. (CSS3/Color/Extended color keywords) 165 | 166 | #### `padding` 167 | 168 | Padding between image border and font. 169 | 170 | #### `line_spacing` 171 | 172 | Spacing between lines of text on the canvas. 173 | 174 | #### Optional Inputs 175 | 176 | #### `images` 177 | 178 | Can be used to input images instead of using background_color. 179 | 180 | #### Outputs 181 | 182 | #### `canvas` 183 | 184 | Used as input on ✒️ Text to Image Generator node to define the canvas settings. 185 | 186 |
187 | 188 |
189 | ⏰ Scheduled Values 190 | 191 | ![Screenshot 2024-04-27 at 17-07-10 ComfyUI](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/ee456e65-9950-4138-8b37-23b007ec92d9) 192 | 193 | 194 | #### Required Inputs 195 | 196 | #### `frame_count` 197 | 198 | Sets the range of the x axis of the chart. (always starts at 1) 199 | 200 | #### `value_range` 201 | 202 | Sets the range of the y axis of the chart. (Example: a value of 25 results in a range from -25 to 25.) 203 | The range can also be adjusted by zooming with the mouse wheel and resets to the specified value whenever it is changed. 204 | 205 | #### `easing_type` 206 | 207 | Easing function used to generate values in between the manually added keyframes when clicking the Generate Values button. 208 | 209 | The available easing functions are: 210 | 211 | - linear 212 | - easeInQuad 213 | - easeOutQuad 214 | - easeInOutQuad 215 | - easeInCubic 216 | - easeOutCubic 217 | - easeInOutCubic 218 | - easeInQuart 219 | - easeOutQuart 220 | - easeInOutQuart 221 | - easeInQuint 222 | - easeOutQuint 223 | - easeInOutQuint 224 | - exponential 225 | 226 | #### `step_mode` 227 | 228 | The option single forces the chart to display every single tick/step on the chart. 229 | The option auto automatically removes ticks/steps to prevent overlapping. 230 | 231 | #### `animation_reset` 232 | 233 | Used to specify the reset behaviour of the animation. 234 | 235 | - word: animation will be reset when a new word is displayed, stays on last value when animation finished before word change. 236 | - line: animation will be reset when a new line is displayed, stays on last value when animation finished before line change. 237 | - never: animation will just run once and stop on last value. (Not affected by word or line change) 238 | - looped: animation will endlessly loop. (Not affected by word or line change) 239 | - pingpong: animation will first play forward then back and so on. (Not affected by word or line change) 240 | 241 | #### `scheduled_values` 242 | 243 | Adding Values: Click on the chart to add keyframes at specific points. 244 | Editing Values: Double-click on a keyframe to edit its frame and value. 245 | Deleting Values: Click on the delete button associated with each keyframe to remove it. 246 | Generating Values: Click on the "Generate Values" button to interpolate values between existing keyframes. 247 | Deleting Generated Values: Click on the "Delete Generated" button to remove all interpolated values. 248 | 249 | #### Outputs 250 | 251 | #### `scheduled_values` 252 | 253 | Outputs a list of frame and value pairs and the animation_reset option. 254 | At the moment this output can be used to animate the following widgets (Convert property to input) of the 🆗 Font Properties node: 255 | - font_size (font, highlight_font) 256 | - x_offset (font) 257 | - y_offset (font) 258 | - rotation (font) 259 | 260 |
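To make the easing behaviour concrete, the sketch below (illustrative only, with hypothetical helper names) fills in the frames between two user-set keyframes using easeInOutQuad, which is roughly what the Generate Values button does for the selected easing_type:

```
def ease_in_out_quad(t):
    """Standard easeInOutQuad curve for t in [0, 1]."""
    return 2 * t * t if t < 0.5 else 1 - (-2 * t + 2) ** 2 / 2

def generate_values(keyframes, frame_count):
    """Fill every frame between user-set keyframes with eased values (a sketch)."""
    keyframes = sorted(keyframes, key=lambda k: k["x"])
    values = []
    for frame in range(1, frame_count + 1):
        # Find the keyframes surrounding this frame.
        prev = max((k for k in keyframes if k["x"] <= frame), key=lambda k: k["x"], default=keyframes[0])
        nxt = min((k for k in keyframes if k["x"] >= frame), key=lambda k: k["x"], default=keyframes[-1])
        if nxt["x"] == prev["x"]:
            values.append(prev["y"])
            continue
        t = (frame - prev["x"]) / (nxt["x"] - prev["x"])
        values.append(round(prev["y"] + (nxt["y"] - prev["y"]) * ease_in_out_quad(t)))
    return values

# Two keyframes, 10 frames: eases from -25 up to 25.
print(generate_values([{"x": 1, "y": -25}, {"x": 10, "y": 25}], 10))
```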
261 | 262 |
263 | 🌈 Preset Color Animations 264 | 265 | #### Required Inputs 266 | 267 | #### `color_preset` 268 | 269 | Currently the following color animation presets are available: 270 | - rainbow 271 | - sunset 272 | - grey 273 | - ocean 274 | - forest 275 | - fire 276 | - sky 277 | - earth 278 | 279 | #### `animation_duration` 280 | 281 | Sets the length of the animation in frames. 282 | 283 | #### `animation_reset` 284 | 285 | Used to specify the reset behaviour of the animation. 286 | 287 | - word: animation will be reset when a new word is displayed, stays on last value when animation finished before word change. 288 | - line: animation will be reset when a new line is displayed, stays on last value when animation finished before line change. 289 | - never: animation will just run once and stop on last value. (Not affected by word or line change) 290 | - looped: animation will endlessly loop. (Not affected by word or line change) 291 | - pingpong: animation will first play forward then back and so on. (Not affected by word or line change) 292 | 293 | #### Outputs 294 | 295 | #### `scheduled_colors` 296 | 297 | Outputs a list of frame and color definitions and the animation_reset option. 298 | At the moment this output can be used to animate the following widgets (Convert property to input) of the 🆗 Font Properties node: 299 | - font_color (font, highlight_font) 300 | - border_color (font, highlight_font) 301 | - shadow_color (font, highlight_font) 302 | 303 |
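The animation_reset modes boil down to how a global frame index is mapped back into the animation's own frame range. A small illustrative sketch (hypothetical helper, not the node's code) for the never/looped/pingpong modes:

```
def animation_frame(frame, duration, animation_reset):
    """Map a global frame index (0-based) to a frame inside the animation (a sketch)."""
    if duration <= 1:
        return 0
    if animation_reset == "never":
        return min(frame, duration - 1)      # play once, then hold the last value
    if animation_reset == "looped":
        return frame % duration              # wrap around endlessly
    if animation_reset == "pingpong":
        cycle = frame % (2 * duration - 2)   # forward, then backward
        return cycle if cycle < duration else 2 * duration - 2 - cycle
    # "word" and "line" behave like "never" but additionally restart the counter
    # whenever the displayed word or line changes.
    raise ValueError("unhandled mode: " + animation_reset)

print([animation_frame(f, 4, "pingpong") for f in range(10)])  # [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
```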
304 | 305 |
306 | 🎤 Speech Recognition 307 | 308 | Converts spoken words in an audio file to text using a deep learning model. 309 | 310 | #### Required Inputs 311 | 312 | #### `audio` 313 | Audio file path or URL. 314 | #### `wav2vec2_model` 315 | The Wav2Vec2 model used for speech recognition. (https://huggingface.co/models?search=wav2vec2) 316 | #### `spell_check_language` 317 | Language for the spell checker. 318 | #### `framestamps_max_chars` 319 | Maximum number of characters allowed before a new framestamp line is created. 320 | 321 | #### Optional Inputs 322 | 323 | #### `fps` 324 | Frames per second, used for synchronizing with video. (Default set to 30) 325 | 326 | #### Outputs 327 | 328 | #### `transcription` 329 | Text transcription of the audio. (Should only be used as font2img transcription input) 330 | #### `raw_string` 331 | Raw string of the transcription without timestamps. 332 | #### `framestamps_string` 333 | Frame-stamped transcription. 334 | #### `timestamps_string` 335 | Transcription with timestamps. 336 | 337 | #### Example Outputs 338 | 339 | #### `raw_string` 340 | Returns the transcribed text as one line. 341 | 342 | ``` 343 | THE GREATEST TRICK THE DEVIL EVER PULLED WAS CONVINCING THE WORLD HE DIDN'T EXIST 344 | ``` 345 | 346 | #### `framestamps_string` 347 | Depending on the framestamps_max_chars parameter, the caption is cleared and starts to build up again each time max_chars is reached. 348 | - In this example framestamps_max_chars is set to 25. 349 | 350 | ``` 351 | "27": "THE", 352 | "31": "THE GREATEST", 353 | "43": "THE GREATEST TRICK", 354 | "73": "THE GREATEST TRICK THE", 355 | "77": "DEVIL", 356 | "88": "DEVIL EVER", 357 | "94": "DEVIL EVER PULLED", 358 | "127": "DEVIL EVER PULLED WAS", 359 | "133": "CONVINCING", 360 | "150": "CONVINCING THE", 361 | "154": "CONVINCING THE WORLD", 362 | "167": "CONVINCING THE WORLD HE", 363 | "171": "DIDN'T", 364 | "178": "DIDN'T EXIST", 365 | ``` 366 | 367 | #### `timestamps_string` 368 | Returns all transcribed words, their start_time and end_time in JSON format as a string. 369 | 370 | ``` 371 | [ 372 | { 373 | "word": "THE", 374 | "start_time": 0.9, 375 | "end_time": 0.98 376 | }, 377 | { 378 | "word": "GREATEST", 379 | "start_time": 1.04, 380 | "end_time": 1.36 381 | }, 382 | { 383 | "word": "TRICK", 384 | "start_time": 1.44, 385 | "end_time": 1.68 386 | }, 387 | ... 388 | ] 389 | ``` 390 | 391 |
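The relationship between the string outputs is straightforward: each framestamp is simply a word's start_time multiplied by fps, and the caption is cleared whenever it would exceed framestamps_max_chars. A rough sketch (hypothetical helper, not the node's code) that reproduces the example above at 30 fps:

```
def to_framestamps(words, fps=30, max_chars=25):
    """Turn word timestamps into '"frame": "caption",' lines (a sketch)."""
    lines, caption = [], ""
    for w in words:
        candidate = (caption + " " + w["word"]).strip()
        if len(candidate) > max_chars:
            candidate = w["word"]          # too long: clear and start a new caption
        caption = candidate
        frame = round(w["start_time"] * fps)
        lines.append('"%d": "%s",' % (frame, caption))
    return "\n".join(lines)

words = [
    {"word": "THE", "start_time": 0.9, "end_time": 0.98},
    {"word": "GREATEST", "start_time": 1.04, "end_time": 1.36},
    {"word": "TRICK", "start_time": 1.44, "end_time": 1.68},
]
print(to_framestamps(words))  # "27": "THE" / "31": "THE GREATEST" / "43": "THE GREATEST TRICK"
```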
392 | 393 |
394 | 🎞️ Split Video 395 | 396 | 397 | #### Required Inputs 398 | 399 | #### `video` 400 | Path to the video file. 401 | #### `frame_limit` 402 | Maximum number of frames to extract from the video. 403 | #### `frame_start` 404 | Starting frame number for extraction. 405 | #### `filename_prefix` 406 | Prefix for naming the extracted audio file. (relative to .\ComfyUI\output) 407 | 408 | #### Outputs 409 | 410 | #### `frames` 411 | Extracted frames as image tensors. 412 | #### `frame_count` 413 | Total number of frames extracted. 414 | #### `audio_file` 415 | Path of the extracted audio file. 416 | #### `fps` 417 | Frames per second of the video. 418 | #### `height`, `width` 419 | Dimensions of the extracted frames. 420 | 421 |
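For reference, frame extraction along these lines can be sketched with OpenCV — an assumption for illustration only; the node itself may use a different library, and audio extraction (typically an ffmpeg call) is omitted here:

```
import cv2

def split_video(video, frame_start=0, frame_limit=100):
    """Read up to frame_limit frames starting at frame_start (illustrative sketch)."""
    cap = cv2.VideoCapture(video)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_start)   # jump to the starting frame
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    while len(frames) < frame_limit:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # image nodes expect RGB
    cap.release()
    height, width = (frames[0].shape[:2] if frames else (0, 0))
    return frames, len(frames), fps, height, width
```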
422 | 423 |
424 | 🎥 Combine Video 425 | 426 | #### Required Inputs 427 | 428 | #### `frames` 429 | Sequence of images to be used as video frames. 430 | #### `filename_prefix` 431 | Prefix for naming the video file. (relative to .\ComfyUI\output) 432 | #### `fps` 433 | Frames per second for the video. 434 | 435 | #### Optional Inputs 436 | 437 | #### `audio_file` 438 | Audio file path or URL. 439 | 440 | #### Outputs 441 | 442 | #### `video_file` 443 | Path to the created video file. 444 | 445 |
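Combining an image sequence with an optional audio track can be sketched with moviepy 1.x — an assumption for illustration; the node's own implementation may differ:

```
from moviepy.editor import AudioFileClip, ImageSequenceClip

def combine_video(frames, fps, video_path, audio_file=None):
    """Write an image sequence to a video file, muxing in audio if given (a sketch)."""
    clip = ImageSequenceClip(frames, fps=fps)        # frames: list of RGB uint8 arrays
    if audio_file:
        clip = clip.set_audio(AudioFileClip(audio_file))
    clip.write_videofile(video_path, codec="libx264", audio_codec="aac")
    return video_path

# combine_video(frames, fps=30, video_path="output/video.mp4", audio_file="output/audio.wav")
```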
446 | 447 |
448 | 📣 Generate Audio (experimental) 449 | 450 | 451 | Converts text to speech and saves the output as an audio file. 452 | 453 | #### Required Inputs 454 | 455 | #### `text` 456 | The text to be converted into speech. 457 | #### `filename_prefix` 458 | Prefix for naming the audio file. (relative to .\ComfyUI\output) 459 | 460 | This node uses a text-to-speech pipeline to convert input text into spoken words, saving the result as a WAV file. The generated audio file is named using the provided filename prefix and is stored relative to the .\ComfyUI-Mana-Nodes directory. 461 | 462 | Model: [https://huggingface.co/spaces/suno/bark](https://huggingface.co/suno/bark) 463 | 464 | #### Foreign Language 465 | 466 | Bark supports various languages out-of-the-box and automatically determines language from input text. When prompted with code-switched text, Bark will even attempt to employ the native accent for the respective languages in the same voice. 467 | 468 | Example: 469 |
Buenos días Miguel. Tu colega piensa que tu alemán es extremadamente malo. But I suppose your english isn't terrible.
470 | 471 | #### Non-Speech Sounds 472 | 473 | Below is a list of some known non-speech sounds, but we are finding more every day. 474 |
475 | [laughter]
476 | [laughs]
477 | [sighs]
478 | [music]
479 | [gasps]
480 | [clears throat]
481 | — or … for hesitations
482 | ♪ for song lyrics
483 | capitalization for emphasis of a word
484 | MAN/WOMAN: for bias towards speaker
485 | 
486 | 487 | Example: 488 |
" [clears throat] Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as... ♪ singing ♪."
489 | 490 | #### Music 491 | 492 | Bark can generate all types of audio, and, in principle, doesn’t see a difference between speech and music. Sometimes Bark chooses to generate text as music, but you can help it out by adding music notes around your lyrics. 493 | 494 | Example: 495 |
♪ In the jungle, the mighty jungle, the lion barks tonight ♪
496 | 497 | #### Speaker Prompts 498 | 499 | You can provide certain speaker prompts such as NARRATOR, MAN, WOMAN, etc. Please note that these are not always respected, especially if a conflicting audio history prompt is given. 500 | 501 | Example: 502 |
WOMAN: I would like an oatmilk latte please.
503 | MAN: Wow, that's expensive!
504 | 505 | 506 | 507 |
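As a rough idea of the text-to-speech pipeline mentioned at the top of this section, the snippet below generates speech with Bark via the Hugging Face transformers text-to-speech pipeline and writes it to a WAV file. It is only a sketch: the node's actual code may differ, and suno/bark-small is used here just to keep the download small.

```
import numpy as np
import scipy.io.wavfile
from transformers import pipeline

def generate_audio(text, filename_prefix):
    """Generate speech with Bark and save it as a .wav file (illustrative sketch)."""
    tts = pipeline("text-to-speech", model="suno/bark-small")
    out = tts(text)                                   # {"audio": array, "sampling_rate": int}
    audio = np.squeeze(out["audio"]).astype(np.float32)
    path = filename_prefix + ".wav"
    scipy.io.wavfile.write(path, out["sampling_rate"], audio)
    return path

# generate_audio("[clears throat] Hello, my name is Suno. [laughs]", "audio/speech")
```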
508 |
509 | 📝 Save/Preview Text 510 | 511 | #### Required Inputs 512 | 513 | #### `string` 514 | The string to be written to the file. 515 | #### `filename_prefix` 516 | Prefix for naming the text file. (relative to .\output) 517 | 518 |
519 | 520 | ## Example Workflows 521 | 522 | ### LCM AnimateDiff Text Animation 523 | 524 | #### Demo 525 | 526 | | Demo 1 | Demo 2 | Demo 3 | 527 | | ------ | ------ | ------ | 528 | |![demo1](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/7b77b9cc-457f-4061-ac6c-2f78efb8bffc)|![demo2](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/89bc4309-6c46-4d08-9d9c-521e00415e65)|![demo3](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/ae2e09c5-459c-4b4d-ad71-4db31684573f)| 529 | 530 | 531 | #### Workflow 532 | 533 | [example_workflow_1.json](example_workflows/example_workflow_1.json) 534 | 535 | The values for the ⏰ Scheduled Values node cannot be imported yet (you have to add them yourself). 536 | 537 | ![Screenshot 2024-04-28 at 19-18-01 ComfyUI](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/fa739ab0-91e5-4df7-9bd9-727abb6fb86a) 538 | 539 | ### Speech Recognition Caption Generator 540 | 541 | #### Demo 542 | 543 | Turn on audio. 544 | 545 | https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/e5a39327-db61-46ad-abea-10e27e4551c1 546 | 547 | #### Workflow 548 | 549 | [example_workflow_2.json](example_workflows/example_workflow_2.json) 550 | 551 | ![TRANSCRIPTION](https://github.com/ForeignGods/ComfyUI-Mana-Nodes/assets/78089013/e4d6aa73-3a4b-483e-b763-73b88c8cb261) 552 | 553 | ## To-Do 554 | 555 | - [ ] Improve Speech Recognition 556 | - [ ] Improve Text to Speech 557 | - [ ] Node to download fonts from DaFont.com 558 | - [ ] SVG Loader/Animator 559 | - [ ] Text to Image Generator Alpha Channel 560 | - [ ] Add Font Support for non Latin Characters 561 | - [ ] 3D Effects, Bevel/Emboss, Inner Shading, Fade in/out 562 | - [ ] Find a better way to define color animations 563 | - [ ] Make more Font Properties animatable 564 | 565 | ## Contributing 566 | 567 | Your contributions to improve Mana Nodes are welcome! 568 | 569 | If you have suggestions or enhancements, feel free to fork this repository, apply your changes, and create a pull request. For significant modifications or feature requests, please open an issue first to discuss what you'd like to change. 
570 | 571 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- 1 | from .nodes.font2img_node import font2img 2 | from .nodes.speech2text_node import speech2text 3 | from .nodes.video2audio_node import video2audio 4 | from .nodes.string2file_node import string2file 5 | from .nodes.audio2video_node import audio2video 6 | from .nodes.text2speech_node import text2speech 7 | from .nodes.canvas_settings_node import canvas_settings 8 | from .nodes.scheduled_values_node import scheduled_values 9 | from .nodes.color_animations_node import color_animations 10 | from .nodes.text_graphic_element_node import text_graphic_element 11 | from .helpers.logger import logger 12 | 13 | my_logger = logger() 14 | my_logger.error("Mana Web") 15 | 16 | WEB_DIRECTORY = "./web" 17 | 18 | NODE_CLASS_MAPPINGS = { 19 | "Text to Image Generator": font2img, 20 | "Speech Recognition": speech2text, 21 | "Split Video": video2audio, 22 | "Save/Preview Text": string2file, 23 | "Combine Video": audio2video, 24 | "Generate Audio": text2speech, 25 | "Canvas Properties": canvas_settings, 26 | "Font Properties": text_graphic_element, 27 | "Scheduled Values": scheduled_values, 28 | "Preset Color Animations": color_animations 29 | } 30 | 31 | NODE_DISPLAY_NAME_MAPPINGS = { 32 | "Text to Image Generator": "✒️ Text to Image Generator", 33 | "Speech Recognition": "🎤 Speech Recognition", 34 | "Split Video": "🎞️ Split Video", 35 | "Save/Preview Text":"📝 Save/Preview Text", 36 | "Combine Video":"🎥 Combine Video", 37 | "Generate Audio":"📣 Generate Audio", 38 | "Canvas Properties":"🖼️ Canvas Properties", 39 | "Font Properties":"🆗 Font Properties", 40 | "Scheduled Values":"⏰ Scheduled Values", 41 | "Preset Color Animations":"🌈 Preset Color Animations" 42 | } 43 | 44 | __all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"] 45 | -------------------------------------------------------------------------------- /example_workflows/example_workflow_1.json: -------------------------------------------------------------------------------- 1 | { 2 | "last_node_id": 91, 3 | "last_link_id": 129, 4 | "nodes": [ 5 | { 6 | "id": 28, 7 | "type": "Text to Image Generator", 8 | "pos": [ 9 | 10702.52824, 10 | 779.7888500195313 11 | ], 12 | "size": { 13 | "0": 418.1999816894531, 14 | "1": 200 15 | }, 16 | "flags": {}, 17 | "order": 0, 18 | "mode": 0, 19 | "inputs": [ 20 | { 21 | "name": "font", 22 | "type": "TEXT_GRAPHIC_ELEMENT", 23 | "link": null 24 | }, 25 | { 26 | "name": "canvas", 27 | "type": "CANVAS_SETTINGS", 28 | "link": null 29 | }, 30 | { 31 | "name": "transcription", 32 | "type": "TRANSCRIPTION", 33 | "link": null 34 | }, 35 | { 36 | "name": "highlight_font", 37 | "type": "TEXT_GRAPHIC_ELEMENT", 38 | "link": null 39 | } 40 | ], 41 | "outputs": [ 42 | { 43 | "name": "images", 44 | "type": "IMAGE", 45 | "links": null, 46 | "shape": 3 47 | }, 48 | { 49 | "name": "framestamps_string", 50 | "type": "STRING", 51 | "links": null, 52 | "shape": 3 53 | } 54 | ], 55 | "properties": { 56 | "Node name for S&R": "Text to Image Generator" 57 | }, 58 | "widgets_values": [ 59 | "", 60 | 1 61 | ] 62 | }, 63 | { 64 | "id": 13, 65 | "type": "CLIPTextEncode", 66 | "pos": [ 67 | -527, 68 | 740 69 | ], 70 | "size": { 71 | "0": 302.2987976074219, 72 | "1": 209.35433959960938 73 | }, 74 | "flags": {}, 75 | "order": 17, 76 | "mode": 0, 77 | "inputs": [ 78 | { 79 | "name": "clip", 80 | "type": "CLIP", 81 | "link": 121 82 | } 83 
| ], 84 | "outputs": [ 85 | { 86 | "name": "CONDITIONING", 87 | "type": "CONDITIONING", 88 | "links": [ 89 | 17 90 | ], 91 | "shape": 3, 92 | "slot_index": 0 93 | } 94 | ], 95 | "properties": { 96 | "Node name for S&R": "CLIPTextEncode" 97 | }, 98 | "widgets_values": [ 99 | "embedding:BadDream, embedding:UnrealisticDream, embedding:easynegative, embedding:ng_deepnegative_v1_75t, " 100 | ], 101 | "color": "#322", 102 | "bgcolor": "#533" 103 | }, 104 | { 105 | "id": 5, 106 | "type": "CheckpointLoaderSimpleWithNoiseSelect", 107 | "pos": [ 108 | -892, 109 | 481 110 | ], 111 | "size": { 112 | "0": 319.20001220703125, 113 | "1": 170 114 | }, 115 | "flags": {}, 116 | "order": 1, 117 | "mode": 0, 118 | "outputs": [ 119 | { 120 | "name": "MODEL", 121 | "type": "MODEL", 122 | "links": [ 123 | 85 124 | ], 125 | "shape": 3, 126 | "slot_index": 0 127 | }, 128 | { 129 | "name": "CLIP", 130 | "type": "CLIP", 131 | "links": [ 132 | 95 133 | ], 134 | "shape": 3, 135 | "slot_index": 1 136 | }, 137 | { 138 | "name": "VAE", 139 | "type": "VAE", 140 | "links": [ 141 | 47 142 | ], 143 | "shape": 3, 144 | "slot_index": 2 145 | } 146 | ], 147 | "properties": { 148 | "Node name for S&R": "CheckpointLoaderSimpleWithNoiseSelect" 149 | }, 150 | "widgets_values": [ 151 | "photonLCM_v10.safetensors", 152 | "lcm >> sqrt_linear", 153 | false, 154 | 0.18215 155 | ], 156 | "color": "#432", 157 | "bgcolor": "#653" 158 | }, 159 | { 160 | "id": 10, 161 | "type": "ADE_UseEvolvedSampling", 162 | "pos": [ 163 | 1569, 164 | 521 165 | ], 166 | "size": { 167 | "0": 335.1678771972656, 168 | "1": 118 169 | }, 170 | "flags": {}, 171 | "order": 15, 172 | "mode": 0, 173 | "inputs": [ 174 | { 175 | "name": "model", 176 | "type": "MODEL", 177 | "link": 122 178 | }, 179 | { 180 | "name": "m_models", 181 | "type": "M_MODELS", 182 | "link": 2 183 | }, 184 | { 185 | "name": "context_options", 186 | "type": "CONTEXT_OPTIONS", 187 | "link": 3 188 | }, 189 | { 190 | "name": "sample_settings", 191 | "type": "SAMPLE_SETTINGS", 192 | "link": null 193 | } 194 | ], 195 | "outputs": [ 196 | { 197 | "name": "MODEL", 198 | "type": "MODEL", 199 | "links": [ 200 | 43 201 | ], 202 | "shape": 3, 203 | "slot_index": 0 204 | } 205 | ], 206 | "properties": { 207 | "Node name for S&R": "ADE_UseEvolvedSampling" 208 | }, 209 | "widgets_values": [ 210 | "lcm >> sqrt_linear" 211 | ], 212 | "color": "#432", 213 | "bgcolor": "#653" 214 | }, 215 | { 216 | "id": 2, 217 | "type": "ADE_LoadAnimateDiffModel", 218 | "pos": [ 219 | 1574, 220 | 689 221 | ], 222 | "size": { 223 | "0": 325.1678771972656, 224 | "1": 101.9017333984375 225 | }, 226 | "flags": {}, 227 | "order": 2, 228 | "mode": 0, 229 | "inputs": [ 230 | { 231 | "name": "ad_settings", 232 | "type": "AD_SETTINGS", 233 | "link": null 234 | } 235 | ], 236 | "outputs": [ 237 | { 238 | "name": "MOTION_MODEL", 239 | "type": "MOTION_MODEL_ADE", 240 | "links": [ 241 | 1 242 | ], 243 | "shape": 3, 244 | "slot_index": 0 245 | } 246 | ], 247 | "properties": { 248 | "Node name for S&R": "ADE_LoadAnimateDiffModel" 249 | }, 250 | "widgets_values": [ 251 | "animatediffLCMMotion_v10.ckpt" 252 | ], 253 | "color": "#432", 254 | "bgcolor": "#653" 255 | }, 256 | { 257 | "id": 4, 258 | "type": "ADE_ApplyAnimateDiffModelSimple", 259 | "pos": [ 260 | 1576, 261 | 852 262 | ], 263 | "size": { 264 | "0": 330.6319885253906, 265 | "1": 106 266 | }, 267 | "flags": {}, 268 | "order": 10, 269 | "mode": 0, 270 | "inputs": [ 271 | { 272 | "name": "motion_model", 273 | "type": "MOTION_MODEL_ADE", 274 | "link": 1 275 | }, 276 | { 277 | "name": 
"motion_lora", 278 | "type": "MOTION_LORA", 279 | "link": null 280 | }, 281 | { 282 | "name": "scale_multival", 283 | "type": "MULTIVAL", 284 | "link": null 285 | }, 286 | { 287 | "name": "effect_multival", 288 | "type": "MULTIVAL", 289 | "link": null 290 | }, 291 | { 292 | "name": "ad_keyframes", 293 | "type": "AD_KEYFRAMES", 294 | "link": null 295 | } 296 | ], 297 | "outputs": [ 298 | { 299 | "name": "M_MODELS", 300 | "type": "M_MODELS", 301 | "links": [ 302 | 2 303 | ], 304 | "shape": 3, 305 | "slot_index": 0 306 | } 307 | ], 308 | "properties": { 309 | "Node name for S&R": "ADE_ApplyAnimateDiffModelSimple" 310 | }, 311 | "color": "#432", 312 | "bgcolor": "#653" 313 | }, 314 | { 315 | "id": 11, 316 | "type": "ADE_LoopedUniformContextOptions", 317 | "pos": [ 318 | 1577, 319 | 1019 320 | ], 321 | "size": { 322 | "0": 337.77557373046875, 323 | "1": 246 324 | }, 325 | "flags": {}, 326 | "order": 3, 327 | "mode": 0, 328 | "inputs": [ 329 | { 330 | "name": "prev_context", 331 | "type": "CONTEXT_OPTIONS", 332 | "link": null 333 | }, 334 | { 335 | "name": "view_opts", 336 | "type": "VIEW_OPTS", 337 | "link": null 338 | } 339 | ], 340 | "outputs": [ 341 | { 342 | "name": "CONTEXT_OPTS", 343 | "type": "CONTEXT_OPTIONS", 344 | "links": [ 345 | 3 346 | ], 347 | "shape": 3, 348 | "slot_index": 0 349 | } 350 | ], 351 | "properties": { 352 | "Node name for S&R": "ADE_LoopedUniformContextOptions" 353 | }, 354 | "widgets_values": [ 355 | 16, 356 | 1, 357 | 4, 358 | true, 359 | "pyramid", 360 | false, 361 | 0, 362 | 1 363 | ], 364 | "color": "#432", 365 | "bgcolor": "#653" 366 | }, 367 | { 368 | "id": 91, 369 | "type": "Combine Video", 370 | "pos": [ 371 | 1190, 372 | 1025 373 | ], 374 | "size": [ 375 | 315, 376 | 314 377 | ], 378 | "flags": {}, 379 | "order": 21, 380 | "mode": 0, 381 | "inputs": [ 382 | { 383 | "name": "images", 384 | "type": "IMAGE", 385 | "link": 129 386 | }, 387 | { 388 | "name": "audio_file", 389 | "type": "STRING", 390 | "link": null, 391 | "widget": { 392 | "name": "audio_file" 393 | } 394 | } 395 | ], 396 | "outputs": [ 397 | { 398 | "name": "video_file", 399 | "type": "STRING", 400 | "links": null, 401 | "shape": 3 402 | } 403 | ], 404 | "properties": { 405 | "Node name for S&R": "Combine Video" 406 | }, 407 | "widgets_values": [ 408 | "video\\video", 409 | "", 410 | 30, 411 | null 412 | ], 413 | "color": "#223", 414 | "bgcolor": "#335" 415 | }, 416 | { 417 | "id": 19, 418 | "type": "ControlNetLoaderAdvanced", 419 | "pos": [ 420 | 1191, 421 | 739 422 | ], 423 | "size": { 424 | "0": 327.6000061035156, 425 | "1": 58 426 | }, 427 | "flags": {}, 428 | "order": 4, 429 | "mode": 0, 430 | "inputs": [ 431 | { 432 | "name": "timestep_keyframe", 433 | "type": "TIMESTEP_KEYFRAME", 434 | "link": null 435 | } 436 | ], 437 | "outputs": [ 438 | { 439 | "name": "CONTROL_NET", 440 | "type": "CONTROL_NET", 441 | "links": [ 442 | 23 443 | ], 444 | "shape": 3, 445 | "slot_index": 0 446 | } 447 | ], 448 | "properties": { 449 | "Node name for S&R": "ControlNetLoaderAdvanced" 450 | }, 451 | "widgets_values": [ 452 | "depth_cn.safetensors" 453 | ], 454 | "color": "#2a363b", 455 | "bgcolor": "#3f5159" 456 | }, 457 | { 458 | "id": 16, 459 | "type": "ControlNetApplyAdvanced", 460 | "pos": [ 461 | 1180, 462 | 507 463 | ], 464 | "size": { 465 | "0": 332.12725830078125, 466 | "1": 169.1385040283203 467 | }, 468 | "flags": {}, 469 | "order": 20, 470 | "mode": 0, 471 | "inputs": [ 472 | { 473 | "name": "positive", 474 | "type": "CONDITIONING", 475 | "link": 16 476 | }, 477 | { 478 | "name": "negative", 479 | 
"type": "CONDITIONING", 480 | "link": 17 481 | }, 482 | { 483 | "name": "control_net", 484 | "type": "CONTROL_NET", 485 | "link": 23 486 | }, 487 | { 488 | "name": "image", 489 | "type": "IMAGE", 490 | "link": 83 491 | } 492 | ], 493 | "outputs": [ 494 | { 495 | "name": "positive", 496 | "type": "CONDITIONING", 497 | "links": [ 498 | 44 499 | ], 500 | "shape": 3, 501 | "slot_index": 0 502 | }, 503 | { 504 | "name": "negative", 505 | "type": "CONDITIONING", 506 | "links": [ 507 | 45 508 | ], 509 | "shape": 3, 510 | "slot_index": 1 511 | } 512 | ], 513 | "properties": { 514 | "Node name for S&R": "ControlNetApplyAdvanced" 515 | }, 516 | "widgets_values": [ 517 | 0.65, 518 | 0, 519 | 1 520 | ], 521 | "color": "#2a363b", 522 | "bgcolor": "#3f5159" 523 | }, 524 | { 525 | "id": 50, 526 | "type": "Canvas Properties", 527 | "pos": [ 528 | 718, 529 | 763 530 | ], 531 | "size": { 532 | "0": 315, 533 | "1": 178 534 | }, 535 | "flags": {}, 536 | "order": 18, 537 | "mode": 0, 538 | "inputs": [ 539 | { 540 | "name": "images", 541 | "type": "IMAGE", 542 | "link": 124 543 | } 544 | ], 545 | "outputs": [ 546 | { 547 | "name": "canvas", 548 | "type": "CANVAS_SETTINGS", 549 | "links": [ 550 | 98 551 | ], 552 | "shape": 3, 553 | "slot_index": 0 554 | } 555 | ], 556 | "properties": { 557 | "Node name for S&R": "Canvas Properties" 558 | }, 559 | "widgets_values": [ 560 | 768, 561 | 768, 562 | "black", 563 | "center center", 564 | 0, 565 | 5 566 | ], 567 | "color": "#223", 568 | "bgcolor": "#335" 569 | }, 570 | { 571 | "id": 89, 572 | "type": "Text to Image Generator", 573 | "pos": [ 574 | 721, 575 | 1004 576 | ], 577 | "size": { 578 | "0": 418.1999816894531, 579 | "1": 200 580 | }, 581 | "flags": {}, 582 | "order": 14, 583 | "mode": 0, 584 | "inputs": [ 585 | { 586 | "name": "font", 587 | "type": "TEXT_GRAPHIC_ELEMENT", 588 | "link": 123 589 | }, 590 | { 591 | "name": "canvas", 592 | "type": "CANVAS_SETTINGS", 593 | "link": 125 594 | }, 595 | { 596 | "name": "transcription", 597 | "type": "TRANSCRIPTION", 598 | "link": null 599 | }, 600 | { 601 | "name": "highlight_font", 602 | "type": "TEXT_GRAPHIC_ELEMENT", 603 | "link": null 604 | } 605 | ], 606 | "outputs": [ 607 | { 608 | "name": "images", 609 | "type": "IMAGE", 610 | "links": [ 611 | 124 612 | ], 613 | "shape": 3, 614 | "slot_index": 0 615 | }, 616 | { 617 | "name": "framestamps_string", 618 | "type": "STRING", 619 | "links": null, 620 | "shape": 3 621 | } 622 | ], 623 | "properties": { 624 | "Node name for S&R": "Text to Image Generator" 625 | }, 626 | "widgets_values": [ 627 | "NODES", 628 | 100 629 | ], 630 | "color": "#223", 631 | "bgcolor": "#335" 632 | }, 633 | { 634 | "id": 90, 635 | "type": "Canvas Properties", 636 | "pos": [ 637 | 729, 638 | 1259 639 | ], 640 | "size": { 641 | "0": 315, 642 | "1": 178 643 | }, 644 | "flags": {}, 645 | "order": 5, 646 | "mode": 0, 647 | "inputs": [ 648 | { 649 | "name": "images", 650 | "type": "IMAGE", 651 | "link": null 652 | } 653 | ], 654 | "outputs": [ 655 | { 656 | "name": "canvas", 657 | "type": "CANVAS_SETTINGS", 658 | "links": [ 659 | 125 660 | ], 661 | "shape": 3, 662 | "slot_index": 0 663 | } 664 | ], 665 | "properties": { 666 | "Node name for S&R": "Canvas Properties" 667 | }, 668 | "widgets_values": [ 669 | 768, 670 | 768, 671 | "black", 672 | "center center", 673 | 0, 674 | 5 675 | ], 676 | "color": "#223", 677 | "bgcolor": "#335" 678 | }, 679 | { 680 | "id": 29, 681 | "type": "Text to Image Generator", 682 | "pos": [ 683 | 708, 684 | 506 685 | ], 686 | "size": { 687 | "0": 418.1999816894531, 688 | 
"1": 200 689 | }, 690 | "flags": {}, 691 | "order": 19, 692 | "mode": 0, 693 | "inputs": [ 694 | { 695 | "name": "font", 696 | "type": "TEXT_GRAPHIC_ELEMENT", 697 | "link": 100 698 | }, 699 | { 700 | "name": "canvas", 701 | "type": "CANVAS_SETTINGS", 702 | "link": 98 703 | }, 704 | { 705 | "name": "transcription", 706 | "type": "TRANSCRIPTION", 707 | "link": null 708 | }, 709 | { 710 | "name": "highlight_font", 711 | "type": "TEXT_GRAPHIC_ELEMENT", 712 | "link": null 713 | } 714 | ], 715 | "outputs": [ 716 | { 717 | "name": "images", 718 | "type": "IMAGE", 719 | "links": [ 720 | 83, 721 | 129 722 | ], 723 | "shape": 3, 724 | "slot_index": 0 725 | }, 726 | { 727 | "name": "framestamps_string", 728 | "type": "STRING", 729 | "links": null, 730 | "shape": 3 731 | } 732 | ], 733 | "properties": { 734 | "Node name for S&R": "Text to Image Generator" 735 | }, 736 | "widgets_values": [ 737 | "MANA", 738 | 100 739 | ], 740 | "color": "#223", 741 | "bgcolor": "#335" 742 | }, 743 | { 744 | "id": 46, 745 | "type": "LoraLoader", 746 | "pos": [ 747 | -889, 748 | 713 749 | ], 750 | "size": { 751 | "0": 315, 752 | "1": 126 753 | }, 754 | "flags": {}, 755 | "order": 13, 756 | "mode": 0, 757 | "inputs": [ 758 | { 759 | "name": "model", 760 | "type": "MODEL", 761 | "link": 85 762 | }, 763 | { 764 | "name": "clip", 765 | "type": "CLIP", 766 | "link": 96 767 | } 768 | ], 769 | "outputs": [ 770 | { 771 | "name": "MODEL", 772 | "type": "MODEL", 773 | "links": [ 774 | 122 775 | ], 776 | "shape": 3, 777 | "slot_index": 0 778 | }, 779 | { 780 | "name": "CLIP", 781 | "type": "CLIP", 782 | "links": [ 783 | 120, 784 | 121 785 | ], 786 | "shape": 3, 787 | "slot_index": 1 788 | } 789 | ], 790 | "properties": { 791 | "Node name for S&R": "LoraLoader" 792 | }, 793 | "widgets_values": [ 794 | "v3_sd15_adapter.ckpt", 795 | 1, 796 | 1 797 | ], 798 | "color": "#323", 799 | "bgcolor": "#535" 800 | }, 801 | { 802 | "id": 48, 803 | "type": "CLIPSetLastLayer", 804 | "pos": [ 805 | -890, 806 | 900 807 | ], 808 | "size": { 809 | "0": 315, 810 | "1": 58 811 | }, 812 | "flags": {}, 813 | "order": 9, 814 | "mode": 0, 815 | "inputs": [ 816 | { 817 | "name": "clip", 818 | "type": "CLIP", 819 | "link": 95 820 | } 821 | ], 822 | "outputs": [ 823 | { 824 | "name": "CLIP", 825 | "type": "CLIP", 826 | "links": [ 827 | 96 828 | ], 829 | "shape": 3, 830 | "slot_index": 0 831 | } 832 | ], 833 | "properties": { 834 | "Node name for S&R": "CLIPSetLastLayer" 835 | }, 836 | "widgets_values": [ 837 | -2 838 | ], 839 | "color": "#323", 840 | "bgcolor": "#535" 841 | }, 842 | { 843 | "id": 6, 844 | "type": "ADE_EmptyLatentImageLarge", 845 | "pos": [ 846 | 1193, 847 | 861 848 | ], 849 | "size": { 850 | "0": 323.27099609375, 851 | "1": 106 852 | }, 853 | "flags": {}, 854 | "order": 6, 855 | "mode": 0, 856 | "inputs": [], 857 | "outputs": [ 858 | { 859 | "name": "LATENT", 860 | "type": "LATENT", 861 | "links": [ 862 | 46 863 | ], 864 | "shape": 3, 865 | "slot_index": 0 866 | } 867 | ], 868 | "properties": { 869 | "Node name for S&R": "ADE_EmptyLatentImageLarge" 870 | }, 871 | "widgets_values": [ 872 | 768, 873 | 768, 874 | 100 875 | ], 876 | "color": "#432", 877 | "bgcolor": "#653" 878 | }, 879 | { 880 | "id": 12, 881 | "type": "CLIPTextEncode", 882 | "pos": [ 883 | -530, 884 | 483 885 | ], 886 | "size": { 887 | "0": 304.7629089355469, 888 | "1": 197.95993041992188 889 | }, 890 | "flags": {}, 891 | "order": 16, 892 | "mode": 0, 893 | "inputs": [ 894 | { 895 | "name": "clip", 896 | "type": "CLIP", 897 | "link": 120 898 | } 899 | ], 900 | "outputs": [ 901 | 
{ 902 | "name": "CONDITIONING", 903 | "type": "CONDITIONING", 904 | "links": [ 905 | 16 906 | ], 907 | "shape": 3, 908 | "slot_index": 0 909 | } 910 | ], 911 | "properties": { 912 | "Node name for S&R": "CLIPTextEncode" 913 | }, 914 | "widgets_values": [ 915 | "water, beach, sea," 916 | ], 917 | "color": "#232", 918 | "bgcolor": "#353" 919 | }, 920 | { 921 | "id": 85, 922 | "type": "Scheduled Values", 923 | "pos": [ 924 | -189, 925 | 478 926 | ], 927 | "size": [ 928 | 477.31434894531117, 929 | 482.393426416015 930 | ], 931 | "flags": {}, 932 | "order": 7, 933 | "mode": 0, 934 | "outputs": [ 935 | { 936 | "name": "scheduled_values", 937 | "type": "INT", 938 | "links": [ 939 | 118, 940 | 126 941 | ], 942 | "shape": 3, 943 | "slot_index": 0 944 | } 945 | ], 946 | "properties": { 947 | "Node name for S&R": "Scheduled Values" 948 | }, 949 | "widgets_values": [ 950 | 100, 951 | 60, 952 | "easeInOutQuint", 953 | "auto", 954 | "word", 955 | 592336, 956 | "[{\"x\":1,\"y\":25},{\"x\":2,\"y\":25},{\"x\":3,\"y\":25},{\"x\":4,\"y\":25},{\"x\":5,\"y\":25},{\"x\":6,\"y\":25},{\"x\":7,\"y\":26},{\"x\":8,\"y\":27},{\"x\":9,\"y\":29},{\"x\":10,\"y\":32},{\"x\":11,\"y\":38},{\"x\":12,\"y\":43},{\"x\":13,\"y\":46},{\"x\":14,\"y\":48},{\"x\":15,\"y\":49},{\"x\":16,\"y\":50},{\"x\":17,\"y\":50},{\"x\":18,\"y\":50},{\"x\":19,\"y\":50},{\"x\":20,\"y\":50},{\"x\":21,\"y\":50},{\"x\":22,\"y\":50},{\"x\":23,\"y\":50},{\"x\":24,\"y\":50},{\"x\":25,\"y\":50},{\"x\":26,\"y\":50},{\"x\":27,\"y\":50},{\"x\":28,\"y\":50},{\"x\":29,\"y\":50},{\"x\":30,\"y\":50},{\"x\":31,\"y\":50},{\"x\":32,\"y\":50},{\"x\":33,\"y\":49},{\"x\":34,\"y\":49},{\"x\":35,\"y\":49},{\"x\":36,\"y\":48},{\"x\":37,\"y\":48},{\"x\":38,\"y\":47},{\"x\":39,\"y\":46},{\"x\":40,\"y\":45},{\"x\":41,\"y\":43},{\"x\":42,\"y\":42},{\"x\":43,\"y\":39},{\"x\":44,\"y\":37},{\"x\":45,\"y\":34},{\"x\":46,\"y\":30},{\"x\":47,\"y\":26},{\"x\":48,\"y\":20},{\"x\":49,\"y\":15},{\"x\":50,\"y\":8},{\"x\":51,\"y\":0},{\"x\":52,\"y\":-8},{\"x\":53,\"y\":-15},{\"x\":54,\"y\":-20},{\"x\":55,\"y\":-26},{\"x\":56,\"y\":-30},{\"x\":57,\"y\":-34},{\"x\":58,\"y\":-37},{\"x\":59,\"y\":-39},{\"x\":60,\"y\":-42},{\"x\":61,\"y\":-43},{\"x\":62,\"y\":-45},{\"x\":63,\"y\":-46},{\"x\":64,\"y\":-47},{\"x\":65,\"y\":-48},{\"x\":66,\"y\":-48},{\"x\":67,\"y\":-49},{\"x\":68,\"y\":-49},{\"x\":69,\"y\":-49},{\"x\":70,\"y\":-50},{\"x\":71,\"y\":-50},{\"x\":72,\"y\":-50},{\"x\":73,\"y\":-50},{\"x\":74,\"y\":-50},{\"x\":75,\"y\":-50},{\"x\":76,\"y\":-50},{\"x\":77,\"y\":-50},{\"x\":78,\"y\":-50},{\"x\":79,\"y\":-50},{\"x\":80,\"y\":-50},{\"x\":81,\"y\":-50},{\"x\":82,\"y\":-50},{\"x\":83,\"y\":-50},{\"x\":84,\"y\":-50},{\"x\":85,\"y\":-50},{\"x\":86,\"y\":-49},{\"x\":87,\"y\":-47},{\"x\":88,\"y\":-44},{\"x\":89,\"y\":-38},{\"x\":90,\"y\":-28},{\"x\":91,\"y\":-12},{\"x\":92,\"y\":3},{\"x\":93,\"y\":13},{\"x\":94,\"y\":19},{\"x\":95,\"y\":22},{\"x\":96,\"y\":24},{\"x\":97,\"y\":25},{\"x\":98,\"y\":25},{\"x\":99,\"y\":25},{\"x\":100,\"y\":25},{\"x\":101,\"y\":25}]", 957 | null 958 | ], 959 | "color": "#223", 960 | "bgcolor": "#335" 961 | }, 962 | { 963 | "id": 87, 964 | "type": "Scheduled Values", 965 | "pos": [ 966 | -206, 967 | 1189 968 | ], 969 | "size": { 970 | "0": 503.4844970703125, 971 | "1": 455.8011169433594 972 | }, 973 | "flags": {}, 974 | "order": 8, 975 | "mode": 0, 976 | "outputs": [ 977 | { 978 | "name": "scheduled_values", 979 | "type": "INT", 980 | "links": [ 981 | 127, 982 | 128 983 | ], 984 | "shape": 3, 985 | "slot_index": 0 986 | } 987 | ], 988 | "properties": { 989 | "Node 
name for S&R": "Scheduled Values" 990 | }, 991 | "widgets_values": [ 992 | 100, 993 | 60, 994 | "easeInOutQuint", 995 | "auto", 996 | "word", 997 | 374796, 998 | "[{\"x\":1,\"y\":-25},{\"x\":2,\"y\":-25},{\"x\":3,\"y\":-25},{\"x\":4,\"y\":-25},{\"x\":5,\"y\":-25},{\"x\":6,\"y\":-25},{\"x\":7,\"y\":-26},{\"x\":8,\"y\":-27},{\"x\":9,\"y\":-29},{\"x\":10,\"y\":-32},{\"x\":11,\"y\":-37},{\"x\":12,\"y\":-43},{\"x\":13,\"y\":-46},{\"x\":14,\"y\":-48},{\"x\":15,\"y\":-49},{\"x\":16,\"y\":-50},{\"x\":17,\"y\":-50},{\"x\":18,\"y\":-50},{\"x\":19,\"y\":-50},{\"x\":20,\"y\":-50},{\"x\":21,\"y\":-50},{\"x\":22,\"y\":-50},{\"x\":23,\"y\":-50},{\"x\":24,\"y\":-50},{\"x\":25,\"y\":-50},{\"x\":26,\"y\":-50},{\"x\":27,\"y\":-50},{\"x\":28,\"y\":-50},{\"x\":29,\"y\":-50},{\"x\":30,\"y\":-50},{\"x\":31,\"y\":-50},{\"x\":32,\"y\":-50},{\"x\":33,\"y\":-49},{\"x\":34,\"y\":-49},{\"x\":35,\"y\":-49},{\"x\":36,\"y\":-48},{\"x\":37,\"y\":-48},{\"x\":38,\"y\":-47},{\"x\":39,\"y\":-46},{\"x\":40,\"y\":-45},{\"x\":41,\"y\":-43},{\"x\":42,\"y\":-42},{\"x\":43,\"y\":-39},{\"x\":44,\"y\":-37},{\"x\":45,\"y\":-34},{\"x\":46,\"y\":-30},{\"x\":47,\"y\":-26},{\"x\":48,\"y\":-20},{\"x\":49,\"y\":-15},{\"x\":50,\"y\":-8},{\"x\":51,\"y\":0},{\"x\":52,\"y\":8},{\"x\":53,\"y\":15},{\"x\":54,\"y\":20},{\"x\":55,\"y\":26},{\"x\":56,\"y\":30},{\"x\":57,\"y\":34},{\"x\":58,\"y\":37},{\"x\":59,\"y\":39},{\"x\":60,\"y\":42},{\"x\":61,\"y\":43},{\"x\":62,\"y\":45},{\"x\":63,\"y\":46},{\"x\":64,\"y\":47},{\"x\":65,\"y\":48},{\"x\":66,\"y\":48},{\"x\":67,\"y\":49},{\"x\":68,\"y\":49},{\"x\":69,\"y\":49},{\"x\":70,\"y\":50},{\"x\":71,\"y\":50},{\"x\":72,\"y\":50},{\"x\":73,\"y\":50},{\"x\":74,\"y\":50},{\"x\":75,\"y\":50},{\"x\":76,\"y\":50},{\"x\":77,\"y\":50},{\"x\":78,\"y\":50},{\"x\":79,\"y\":50},{\"x\":80,\"y\":50},{\"x\":81,\"y\":50},{\"x\":82,\"y\":50},{\"x\":83,\"y\":50},{\"x\":84,\"y\":50},{\"x\":85,\"y\":50},{\"x\":86,\"y\":49},{\"x\":87,\"y\":47},{\"x\":88,\"y\":44},{\"x\":89,\"y\":38},{\"x\":90,\"y\":28},{\"x\":91,\"y\":13},{\"x\":92,\"y\":-3},{\"x\":93,\"y\":-13},{\"x\":94,\"y\":-19},{\"x\":95,\"y\":-22},{\"x\":96,\"y\":-24},{\"x\":97,\"y\":-25},{\"x\":98,\"y\":-25},{\"x\":99,\"y\":-25},{\"x\":100,\"y\":-25},{\"x\":101,\"y\":-25}]", 999 | null 1000 | ], 1001 | "color": "#223", 1002 | "bgcolor": "#335" 1003 | }, 1004 | { 1005 | "id": 88, 1006 | "type": "Font Properties", 1007 | "pos": [ 1008 | 349, 1009 | 986 1010 | ], 1011 | "size": { 1012 | "0": 315, 1013 | "1": 370 1014 | }, 1015 | "flags": {}, 1016 | "order": 12, 1017 | "mode": 0, 1018 | "inputs": [ 1019 | { 1020 | "name": "x_offset", 1021 | "type": "INT", 1022 | "link": 127, 1023 | "widget": { 1024 | "name": "x_offset" 1025 | } 1026 | }, 1027 | { 1028 | "name": "y_offset", 1029 | "type": "INT", 1030 | "link": 128, 1031 | "widget": { 1032 | "name": "y_offset" 1033 | } 1034 | } 1035 | ], 1036 | "outputs": [ 1037 | { 1038 | "name": "font", 1039 | "type": "TEXT_GRAPHIC_ELEMENT", 1040 | "links": [ 1041 | 123 1042 | ], 1043 | "shape": 3, 1044 | "slot_index": 0 1045 | } 1046 | ], 1047 | "properties": { 1048 | "Node name for S&R": "Font Properties" 1049 | }, 1050 | "widgets_values": [ 1051 | "Arial", 1052 | 145, 1053 | "white", 1054 | 0, 1055 | 0, 1056 | "grey", 1057 | "grey", 1058 | 0, 1059 | 0, 1060 | 0, 1061 | 0, 1062 | 0, 1063 | 0, 1064 | 0 1065 | ], 1066 | "color": "#223", 1067 | "bgcolor": "#335" 1068 | }, 1069 | { 1070 | "id": 49, 1071 | "type": "Font Properties", 1072 | "pos": [ 1073 | 354, 1074 | 503 1075 | ], 1076 | "size": { 1077 | "0": 315, 1078 | "1": 370 1079 | }, 
1080 | "flags": {}, 1081 | "order": 11, 1082 | "mode": 0, 1083 | "inputs": [ 1084 | { 1085 | "name": "x_offset", 1086 | "type": "INT", 1087 | "link": 118, 1088 | "widget": { 1089 | "name": "x_offset" 1090 | } 1091 | }, 1092 | { 1093 | "name": "y_offset", 1094 | "type": "INT", 1095 | "link": 126, 1096 | "widget": { 1097 | "name": "y_offset" 1098 | } 1099 | } 1100 | ], 1101 | "outputs": [ 1102 | { 1103 | "name": "font", 1104 | "type": "TEXT_GRAPHIC_ELEMENT", 1105 | "links": [ 1106 | 100 1107 | ], 1108 | "shape": 3, 1109 | "slot_index": 0 1110 | } 1111 | ], 1112 | "properties": { 1113 | "Node name for S&R": "Font Properties" 1114 | }, 1115 | "widgets_values": [ 1116 | "Arial", 1117 | 145, 1118 | "white", 1119 | 0, 1120 | 0, 1121 | "grey", 1122 | "grey", 1123 | 0, 1124 | 0, 1125 | 0, 1126 | 0, 1127 | 0, 1128 | 0, 1129 | 0 1130 | ], 1131 | "color": "#223", 1132 | "bgcolor": "#335" 1133 | }, 1134 | { 1135 | "id": 22, 1136 | "type": "KSampler (Efficient)", 1137 | "pos": [ 1138 | 1935, 1139 | 522 1140 | ], 1141 | "size": { 1142 | "0": 315.0492858886719, 1143 | "1": 562 1144 | }, 1145 | "flags": {}, 1146 | "order": 22, 1147 | "mode": 0, 1148 | "inputs": [ 1149 | { 1150 | "name": "model", 1151 | "type": "MODEL", 1152 | "link": 43 1153 | }, 1154 | { 1155 | "name": "positive", 1156 | "type": "CONDITIONING", 1157 | "link": 44 1158 | }, 1159 | { 1160 | "name": "negative", 1161 | "type": "CONDITIONING", 1162 | "link": 45 1163 | }, 1164 | { 1165 | "name": "latent_image", 1166 | "type": "LATENT", 1167 | "link": 46 1168 | }, 1169 | { 1170 | "name": "optional_vae", 1171 | "type": "VAE", 1172 | "link": 47 1173 | }, 1174 | { 1175 | "name": "script", 1176 | "type": "SCRIPT", 1177 | "link": null 1178 | } 1179 | ], 1180 | "outputs": [ 1181 | { 1182 | "name": "MODEL", 1183 | "type": "MODEL", 1184 | "links": null, 1185 | "shape": 3, 1186 | "slot_index": 0 1187 | }, 1188 | { 1189 | "name": "CONDITIONING+", 1190 | "type": "CONDITIONING", 1191 | "links": null, 1192 | "shape": 3 1193 | }, 1194 | { 1195 | "name": "CONDITIONING-", 1196 | "type": "CONDITIONING", 1197 | "links": null, 1198 | "shape": 3 1199 | }, 1200 | { 1201 | "name": "LATENT", 1202 | "type": "LATENT", 1203 | "links": [], 1204 | "shape": 3, 1205 | "slot_index": 3 1206 | }, 1207 | { 1208 | "name": "VAE", 1209 | "type": "VAE", 1210 | "links": [], 1211 | "shape": 3, 1212 | "slot_index": 4 1213 | }, 1214 | { 1215 | "name": "IMAGE", 1216 | "type": "IMAGE", 1217 | "links": [ 1218 | 54 1219 | ], 1220 | "shape": 3, 1221 | "slot_index": 5 1222 | } 1223 | ], 1224 | "properties": { 1225 | "Node name for S&R": "KSampler (Efficient)" 1226 | }, 1227 | "widgets_values": [ 1228 | 768967982216062, 1229 | null, 1230 | 8, 1231 | 2, 1232 | "lcm", 1233 | "sgm_uniform", 1234 | 1, 1235 | "auto", 1236 | "true" 1237 | ], 1238 | "color": "#332922", 1239 | "bgcolor": "#593930", 1240 | "shape": 1 1241 | }, 1242 | { 1243 | "id": 26, 1244 | "type": "FILM VFI", 1245 | "pos": [ 1246 | 2282, 1247 | 523 1248 | ], 1249 | "size": [ 1250 | 321.0991240625008, 1251 | 170.97481293945316 1252 | ], 1253 | "flags": {}, 1254 | "order": 23, 1255 | "mode": 0, 1256 | "inputs": [ 1257 | { 1258 | "name": "frames", 1259 | "type": "IMAGE", 1260 | "link": 54 1261 | }, 1262 | { 1263 | "name": "optional_interpolation_states", 1264 | "type": "INTERPOLATION_STATES", 1265 | "link": null 1266 | } 1267 | ], 1268 | "outputs": [ 1269 | { 1270 | "name": "IMAGE", 1271 | "type": "IMAGE", 1272 | "links": [ 1273 | 97 1274 | ], 1275 | "shape": 3, 1276 | "slot_index": 0 1277 | } 1278 | ], 1279 | "properties": { 1280 | 
"Node name for S&R": "FILM VFI" 1281 | }, 1282 | "widgets_values": [ 1283 | "film_net_fp32.pt", 1284 | 10, 1285 | 2 1286 | ], 1287 | "color": "#332922", 1288 | "bgcolor": "#593930" 1289 | }, 1290 | { 1291 | "id": 52, 1292 | "type": "Combine Video", 1293 | "pos": [ 1294 | 2289, 1295 | 746 1296 | ], 1297 | "size": { 1298 | "0": 315, 1299 | "1": 314 1300 | }, 1301 | "flags": {}, 1302 | "order": 24, 1303 | "mode": 0, 1304 | "inputs": [ 1305 | { 1306 | "name": "images", 1307 | "type": "IMAGE", 1308 | "link": 97 1309 | }, 1310 | { 1311 | "name": "audio_file", 1312 | "type": "STRING", 1313 | "link": null, 1314 | "widget": { 1315 | "name": "audio_file" 1316 | } 1317 | } 1318 | ], 1319 | "outputs": [ 1320 | { 1321 | "name": "video_file", 1322 | "type": "STRING", 1323 | "links": null, 1324 | "shape": 3 1325 | } 1326 | ], 1327 | "properties": { 1328 | "Node name for S&R": "Combine Video" 1329 | }, 1330 | "widgets_values": [ 1331 | "video\\video3", 1332 | "", 1333 | 30, 1334 | null 1335 | ], 1336 | "color": "#223", 1337 | "bgcolor": "#335" 1338 | } 1339 | ], 1340 | "links": [ 1341 | [ 1342 | 1, 1343 | 2, 1344 | 0, 1345 | 4, 1346 | 0, 1347 | "MOTION_MODEL_ADE" 1348 | ], 1349 | [ 1350 | 2, 1351 | 4, 1352 | 0, 1353 | 10, 1354 | 1, 1355 | "M_MODELS" 1356 | ], 1357 | [ 1358 | 3, 1359 | 11, 1360 | 0, 1361 | 10, 1362 | 2, 1363 | "CONTEXT_OPTIONS" 1364 | ], 1365 | [ 1366 | 16, 1367 | 12, 1368 | 0, 1369 | 16, 1370 | 0, 1371 | "CONDITIONING" 1372 | ], 1373 | [ 1374 | 17, 1375 | 13, 1376 | 0, 1377 | 16, 1378 | 1, 1379 | "CONDITIONING" 1380 | ], 1381 | [ 1382 | 23, 1383 | 19, 1384 | 0, 1385 | 16, 1386 | 2, 1387 | "CONTROL_NET" 1388 | ], 1389 | [ 1390 | 43, 1391 | 10, 1392 | 0, 1393 | 22, 1394 | 0, 1395 | "MODEL" 1396 | ], 1397 | [ 1398 | 44, 1399 | 16, 1400 | 0, 1401 | 22, 1402 | 1, 1403 | "CONDITIONING" 1404 | ], 1405 | [ 1406 | 45, 1407 | 16, 1408 | 1, 1409 | 22, 1410 | 2, 1411 | "CONDITIONING" 1412 | ], 1413 | [ 1414 | 46, 1415 | 6, 1416 | 0, 1417 | 22, 1418 | 3, 1419 | "LATENT" 1420 | ], 1421 | [ 1422 | 47, 1423 | 5, 1424 | 2, 1425 | 22, 1426 | 4, 1427 | "VAE" 1428 | ], 1429 | [ 1430 | 54, 1431 | 22, 1432 | 5, 1433 | 26, 1434 | 0, 1435 | "IMAGE" 1436 | ], 1437 | [ 1438 | 83, 1439 | 29, 1440 | 0, 1441 | 16, 1442 | 3, 1443 | "IMAGE" 1444 | ], 1445 | [ 1446 | 85, 1447 | 5, 1448 | 0, 1449 | 46, 1450 | 0, 1451 | "MODEL" 1452 | ], 1453 | [ 1454 | 95, 1455 | 5, 1456 | 1, 1457 | 48, 1458 | 0, 1459 | "CLIP" 1460 | ], 1461 | [ 1462 | 96, 1463 | 48, 1464 | 0, 1465 | 46, 1466 | 1, 1467 | "CLIP" 1468 | ], 1469 | [ 1470 | 97, 1471 | 26, 1472 | 0, 1473 | 52, 1474 | 0, 1475 | "IMAGE" 1476 | ], 1477 | [ 1478 | 98, 1479 | 50, 1480 | 0, 1481 | 29, 1482 | 1, 1483 | "CANVAS_SETTINGS" 1484 | ], 1485 | [ 1486 | 100, 1487 | 49, 1488 | 0, 1489 | 29, 1490 | 0, 1491 | "TEXT_GRAPHIC_ELEMENT" 1492 | ], 1493 | [ 1494 | 118, 1495 | 85, 1496 | 0, 1497 | 49, 1498 | 0, 1499 | "INT" 1500 | ], 1501 | [ 1502 | 120, 1503 | 46, 1504 | 1, 1505 | 12, 1506 | 0, 1507 | "CLIP" 1508 | ], 1509 | [ 1510 | 121, 1511 | 46, 1512 | 1, 1513 | 13, 1514 | 0, 1515 | "CLIP" 1516 | ], 1517 | [ 1518 | 122, 1519 | 46, 1520 | 0, 1521 | 10, 1522 | 0, 1523 | "MODEL" 1524 | ], 1525 | [ 1526 | 123, 1527 | 88, 1528 | 0, 1529 | 89, 1530 | 0, 1531 | "TEXT_GRAPHIC_ELEMENT" 1532 | ], 1533 | [ 1534 | 124, 1535 | 89, 1536 | 0, 1537 | 50, 1538 | 0, 1539 | "IMAGE" 1540 | ], 1541 | [ 1542 | 125, 1543 | 90, 1544 | 0, 1545 | 89, 1546 | 1, 1547 | "CANVAS_SETTINGS" 1548 | ], 1549 | [ 1550 | 126, 1551 | 85, 1552 | 0, 1553 | 49, 1554 | 1, 1555 | "INT" 1556 | ], 1557 | [ 1558 | 127, 1559 
| 87, 1560 | 0, 1561 | 88, 1562 | 0, 1563 | "INT" 1564 | ], 1565 | [ 1566 | 128, 1567 | 87, 1568 | 0, 1569 | 88, 1570 | 1, 1571 | "INT" 1572 | ], 1573 | [ 1574 | 129, 1575 | 29, 1576 | 0, 1577 | 91, 1578 | 0, 1579 | "IMAGE" 1580 | ] 1581 | ], 1582 | "groups": [], 1583 | "config": {}, 1584 | "extra": {}, 1585 | "version": 0.4 1586 | } -------------------------------------------------------------------------------- /example_workflows/example_workflow_2.json: -------------------------------------------------------------------------------- 1 | { 2 | "last_node_id": 23, 3 | "last_link_id": 35, 4 | "nodes": [ 5 | { 6 | "id": 16, 7 | "type": "Scheduled Values", 8 | "pos": [ 9 | -52, 10 | 54 11 | ], 12 | "size": [ 13 | 367.58266601562514, 14 | 386.97208496093754 15 | ], 16 | "flags": {}, 17 | "order": 0, 18 | "mode": 0, 19 | "outputs": [ 20 | { 21 | "name": "scheduled_values", 22 | "type": "INT", 23 | "links": [ 24 | 24 25 | ], 26 | "shape": 3, 27 | "slot_index": 0 28 | } 29 | ], 30 | "properties": { 31 | "Node name for S&R": "Scheduled Values" 32 | }, 33 | "widgets_values": [ 34 | 30, 35 | 15, 36 | "easeOutQuint", 37 | "auto", 38 | "word", 39 | 323576, 40 | "[{\"x\":1,\"y\":30},{\"x\":2,\"y\":17},{\"x\":3,\"y\":9},{\"x\":4,\"y\":4},{\"x\":5,\"y\":2},{\"x\":6,\"y\":1},{\"x\":7,\"y\":0},{\"x\":8,\"y\":0},{\"x\":9,\"y\":0},{\"x\":10,\"y\":0}]", 41 | null 42 | ], 43 | "color": "#223", 44 | "bgcolor": "#335" 45 | }, 46 | { 47 | "id": 17, 48 | "type": "Scheduled Values", 49 | "pos": [ 50 | -42, 51 | 631 52 | ], 53 | "size": [ 54 | 356.1326611328127, 55 | 389.0420849609377 56 | ], 57 | "flags": {}, 58 | "order": 1, 59 | "mode": 0, 60 | "outputs": [ 61 | { 62 | "name": "scheduled_values", 63 | "type": "INT", 64 | "links": [ 65 | 25 66 | ], 67 | "shape": 3, 68 | "slot_index": 0 69 | } 70 | ], 71 | "properties": { 72 | "Node name for S&R": "Scheduled Values" 73 | }, 74 | "widgets_values": [ 75 | 40, 76 | 15, 77 | "easeOutQuint", 78 | "auto", 79 | "word", 80 | 790904, 81 | "[{\"x\":1,\"y\":100},{\"x\":2,\"y\":122},{\"x\":3,\"y\":136},{\"x\":4,\"y\":143},{\"x\":5,\"y\":147},{\"x\":6,\"y\":149},{\"x\":7,\"y\":150},{\"x\":8,\"y\":150},{\"x\":9,\"y\":150},{\"x\":10,\"y\":150}]", 82 | null 83 | ], 84 | "color": "#223", 85 | "bgcolor": "#335" 86 | }, 87 | { 88 | "id": 8, 89 | "type": "Split Video", 90 | "pos": [ 91 | -990, 92 | 24 93 | ], 94 | "size": [ 95 | 319.63796105468737, 96 | 475.0980966601561 97 | ], 98 | "flags": {}, 99 | "order": 2, 100 | "mode": 0, 101 | "outputs": [ 102 | { 103 | "name": "images", 104 | "type": "IMAGE", 105 | "links": [ 106 | 17 107 | ], 108 | "shape": 3, 109 | "slot_index": 0 110 | }, 111 | { 112 | "name": "audio_file", 113 | "type": "STRING", 114 | "links": [ 115 | 1, 116 | 12 117 | ], 118 | "shape": 3, 119 | "slot_index": 1 120 | }, 121 | { 122 | "name": "fps", 123 | "type": "INT", 124 | "links": [ 125 | 13, 126 | 19 127 | ], 128 | "shape": 3, 129 | "slot_index": 2 130 | }, 131 | { 132 | "name": "frame_count", 133 | "type": "INT", 134 | "links": [ 135 | 14 136 | ], 137 | "shape": 3, 138 | "slot_index": 3 139 | }, 140 | { 141 | "name": "height", 142 | "type": "INT", 143 | "links": [ 144 | 15 145 | ], 146 | "shape": 3, 147 | "slot_index": 4 148 | }, 149 | { 150 | "name": "width", 151 | "type": "INT", 152 | "links": [ 153 | 16 154 | ], 155 | "shape": 3, 156 | "slot_index": 5 157 | } 158 | ], 159 | "properties": { 160 | "Node name for S&R": "Split Video" 161 | }, 162 | "widgets_values": [ 163 | "video/Guts tells you that you’re gonna be alright.mp4", 164 | 650, 165 | 150, 166 | 
"audio\\audio", 167 | "image", 168 | null 169 | ], 170 | "color": "#233", 171 | "bgcolor": "#355" 172 | }, 173 | { 174 | "id": 2, 175 | "type": "Font Properties", 176 | "pos": [ 177 | 363, 178 | 51 179 | ], 180 | "size": [ 181 | 210, 182 | 338 183 | ], 184 | "flags": {}, 185 | "order": 6, 186 | "mode": 0, 187 | "inputs": [ 188 | { 189 | "name": "rotation", 190 | "type": "INT", 191 | "link": 24, 192 | "widget": { 193 | "name": "rotation" 194 | } 195 | }, 196 | { 197 | "name": "font_size", 198 | "type": "INT", 199 | "link": 25, 200 | "widget": { 201 | "name": "font_size" 202 | } 203 | }, 204 | { 205 | "name": "border_color", 206 | "type": "STRING", 207 | "link": 26, 208 | "widget": { 209 | "name": "border_color" 210 | } 211 | } 212 | ], 213 | "outputs": [ 214 | { 215 | "name": "font", 216 | "type": "TEXT_GRAPHIC_ELEMENT", 217 | "links": [ 218 | 6 219 | ], 220 | "shape": 3, 221 | "slot_index": 0 222 | } 223 | ], 224 | "properties": { 225 | "Node name for S&R": "Font Properties" 226 | }, 227 | "widgets_values": [ 228 | "Doctor Glitch", 229 | 175, 230 | "black", 231 | 0, 232 | 4, 233 | "red", 234 | "red", 235 | 0, 236 | 0, 237 | 0, 238 | 0, 239 | 0, 240 | 0, 241 | 0 242 | ], 243 | "color": "#223", 244 | "bgcolor": "#335" 245 | }, 246 | { 247 | "id": 4, 248 | "type": "Preset Color Animations", 249 | "pos": [ 250 | 366, 251 | 444 252 | ], 253 | "size": [ 254 | 218.39999389648438, 255 | 106 256 | ], 257 | "flags": {}, 258 | "order": 3, 259 | "mode": 0, 260 | "outputs": [ 261 | { 262 | "name": "scheduled_colors", 263 | "type": "STRING", 264 | "links": [ 265 | 26 266 | ], 267 | "shape": 3, 268 | "slot_index": 0 269 | } 270 | ], 271 | "properties": { 272 | "Node name for S&R": "Preset Color Animations" 273 | }, 274 | "widgets_values": [ 275 | "fire", 276 | 30, 277 | "pingpong" 278 | ], 279 | "color": "#223", 280 | "bgcolor": "#335" 281 | }, 282 | { 283 | "id": 1, 284 | "type": "Canvas Properties", 285 | "pos": [ 286 | 367, 287 | 610 288 | ], 289 | "size": [ 290 | 210, 291 | 191.3520373535157 292 | ], 293 | "flags": {}, 294 | "order": 5, 295 | "mode": 0, 296 | "inputs": [ 297 | { 298 | "name": "images", 299 | "type": "IMAGE", 300 | "link": 17 301 | }, 302 | { 303 | "name": "height", 304 | "type": "INT", 305 | "link": 15, 306 | "widget": { 307 | "name": "height" 308 | } 309 | }, 310 | { 311 | "name": "width", 312 | "type": "INT", 313 | "link": 16, 314 | "widget": { 315 | "name": "width" 316 | } 317 | } 318 | ], 319 | "outputs": [ 320 | { 321 | "name": "canvas", 322 | "type": "CANVAS_SETTINGS", 323 | "links": [ 324 | 8 325 | ], 326 | "shape": 3, 327 | "slot_index": 0 328 | } 329 | ], 330 | "properties": { 331 | "Node name for S&R": "Canvas Properties" 332 | }, 333 | "widgets_values": [ 334 | 430, 335 | 512, 336 | "black", 337 | "center center", 338 | 0, 339 | 5 340 | ], 341 | "color": "#223", 342 | "bgcolor": "#335" 343 | }, 344 | { 345 | "id": 7, 346 | "type": "Combine Video", 347 | "pos": [ 348 | 612, 349 | 320 350 | ], 351 | "size": [ 352 | 410.9910888671875, 353 | 351.1099760742187 354 | ], 355 | "flags": {}, 356 | "order": 11, 357 | "mode": 0, 358 | "inputs": [ 359 | { 360 | "name": "images", 361 | "type": "IMAGE", 362 | "link": 11 363 | }, 364 | { 365 | "name": "audio_file", 366 | "type": "STRING", 367 | "link": 12, 368 | "widget": { 369 | "name": "audio_file" 370 | } 371 | }, 372 | { 373 | "name": "fps", 374 | "type": "INT", 375 | "link": 13, 376 | "widget": { 377 | "name": "fps" 378 | } 379 | } 380 | ], 381 | "outputs": [ 382 | { 383 | "name": "video_file", 384 | "type": "STRING", 385 | "links": 
null, 386 | "shape": 3 387 | } 388 | ], 389 | "properties": { 390 | "Node name for S&R": "Combine Video" 391 | }, 392 | "widgets_values": [ 393 | "video\\video", 394 | "", 395 | 30, 396 | null 397 | ], 398 | "color": "#233", 399 | "bgcolor": "#355" 400 | }, 401 | { 402 | "id": 9, 403 | "type": "Text to Image Generator", 404 | "pos": [ 405 | 605, 406 | 54 407 | ], 408 | "size": { 409 | "0": 418.1999816894531, 410 | "1": 200 411 | }, 412 | "flags": {}, 413 | "order": 10, 414 | "mode": 0, 415 | "inputs": [ 416 | { 417 | "name": "font", 418 | "type": "TEXT_GRAPHIC_ELEMENT", 419 | "link": 6 420 | }, 421 | { 422 | "name": "canvas", 423 | "type": "CANVAS_SETTINGS", 424 | "link": 8 425 | }, 426 | { 427 | "name": "transcription", 428 | "type": "TRANSCRIPTION", 429 | "link": 35 430 | }, 431 | { 432 | "name": "highlight_font", 433 | "type": "TEXT_GRAPHIC_ELEMENT", 434 | "link": null 435 | }, 436 | { 437 | "name": "frame_count", 438 | "type": "INT", 439 | "link": 14, 440 | "widget": { 441 | "name": "frame_count" 442 | } 443 | } 444 | ], 445 | "outputs": [ 446 | { 447 | "name": "images", 448 | "type": "IMAGE", 449 | "links": [ 450 | 11 451 | ], 452 | "shape": 3, 453 | "slot_index": 0 454 | }, 455 | { 456 | "name": "framestamps_string", 457 | "type": "STRING", 458 | "links": [ 459 | 9 460 | ], 461 | "shape": 3, 462 | "slot_index": 1 463 | } 464 | ], 465 | "properties": { 466 | "Node name for S&R": "Text to Image Generator" 467 | }, 468 | "widgets_values": [ 469 | "\"13\": \"YOU'RE\",\n\"19\": \"GOING\",\n\"26\": \"TO\",\n\"30\": \"BE\",\n\"35\": \"ALL\",\n\"42\": \"RIGHT\",\n\"91\": \"YOU\",\n\"97\": \"JUST\",\n\"103\": \"STUMBLED\",\n\"118\": \"OVER\",\n\"126\": \"A\",\n\"130\": \"STONE\",\n\"143\": \"IN\",\n\"146\": \"THE\",\n\"150\": \"ROAD\",\n\"201\": \"IT\",\n\"206\": \"MEANS\",\n\"221\": \"NOTHING\",\n\"275\": \"YOUR\",\n\"283\": \"GOAL\",\n\"293\": \"LAYS\",\n\"304\": \"FAR\",\n\"313\": \"BEYOND\",\n\"326\": \"THIS\",\n\"364\": \"DOESN'T\",\n\"374\": \"IT\",\n\"424\": \"IM\",\n\"432\": \"SURE\",\n\"439\": \"YOU'LL\",\n\"448\": \"OVERCOME\",\n\"463\": \"THIS\",\n\"504\": \"YOU'LL\",\n\"513\": \"WALK\",\n\"520\": \"AGAIN\",\n\"568\": \"SOON\",", 470 | 1 471 | ], 472 | "color": "#432", 473 | "bgcolor": "#653" 474 | }, 475 | { 476 | "id": 22, 477 | "type": "Save/Preview Text", 478 | "pos": [ 479 | -631, 480 | 356 481 | ], 482 | "size": [ 483 | 310.36296875000005, 484 | 157.292108154297 485 | ], 486 | "flags": {}, 487 | "order": 7, 488 | "mode": 0, 489 | "inputs": [ 490 | { 491 | "name": "string", 492 | "type": "STRING", 493 | "link": 33, 494 | "widget": { 495 | "name": "string" 496 | } 497 | } 498 | ], 499 | "properties": { 500 | "Node name for S&R": "Save/Preview Text" 501 | }, 502 | "widgets_values": [ 503 | "text\\text", 504 | "", 505 | "YOU'RE GOING TO BE ALLYL RRIIIGHHT YOU JUST SSTTUMBLLLED OVER A SSSTTONEE IN THE ROAD IT MEANNESS NNOOTHHIINNG YOUR GOLD LESS FARER BEYYONNDD THIS DOESN'T IT ITEM SSSUUREE YOU'LL OVEERCCOMEE THIS YOU'LL WALK AGAIN SO" 506 | ], 507 | "color": "#232", 508 | "bgcolor": "#353" 509 | }, 510 | { 511 | "id": 23, 512 | "type": "Save/Preview Text", 513 | "pos": [ 514 | -639, 515 | 555 516 | ], 517 | "size": [ 518 | 315.9429687500001, 519 | 345.49210815429694 520 | ], 521 | "flags": {}, 522 | "order": 8, 523 | "mode": 0, 524 | "inputs": [ 525 | { 526 | "name": "string", 527 | "type": "STRING", 528 | "link": 34, 529 | "widget": { 530 | "name": "string" 531 | } 532 | } 533 | ], 534 | "properties": { 535 | "Node name for S&R": "Save/Preview Text" 536 | }, 537 | 
"widgets_values": [ 538 | "text\\text", 539 | "", 540 | "\"13\": \"YOU'RE\",\n\"19\": \"YOU'RE GOING\",\n\"26\": \"YOU'RE GOING TO\",\n\"30\": \"YOU'RE GOING TO BE\",\n\"35\": \"YOU'RE GOING TO BE ALLYL\",\n\"42\": \"YOU'RE GOING TO BE ALLYL RRIIIGHHT\",\n\"91\": \"YOU'RE GOING TO BE ALLYL RRIIIGHHT YOU\",\n\"97\": \"YOU'RE GOING TO BE ALLYL RRIIIGHHT YOU JUST\",\n\"103\": \"SSTTUMBLLLED\",\n\"118\": \"SSTTUMBLLLED OVER\",\n\"126\": \"SSTTUMBLLLED OVER A\",\n\"130\": \"SSTTUMBLLLED OVER A SSSTTONEE\",\n\"143\": \"SSTTUMBLLLED OVER A SSSTTONEE IN\",\n\"146\": \"SSTTUMBLLLED OVER A SSSTTONEE IN THE\",\n\"150\": \"SSTTUMBLLLED OVER A SSSTTONEE IN THE ROAD\",\n\"201\": \"SSTTUMBLLLED OVER A SSSTTONEE IN THE ROAD IT\",\n\"206\": \"MEANNESS\",\n\"221\": \"MEANNESS NNOOTHHIINNG\",\n\"275\": \"MEANNESS NNOOTHHIINNG YOUR\",\n\"283\": \"MEANNESS NNOOTHHIINNG YOUR GOLD\",\n\"293\": \"MEANNESS NNOOTHHIINNG YOUR GOLD LESS\",\n\"304\": \"MEANNESS NNOOTHHIINNG YOUR GOLD LESS FARER\",\n\"313\": \"BEYYONNDD\",\n\"326\": \"BEYYONNDD THIS\",\n\"364\": \"BEYYONNDD THIS DOESN'T\",\n\"374\": \"BEYYONNDD THIS DOESN'T IT\",\n\"424\": \"BEYYONNDD THIS DOESN'T IT ITEM\",\n\"432\": \"BEYYONNDD THIS DOESN'T IT ITEM SSSUUREE\",\n\"439\": \"BEYYONNDD THIS DOESN'T IT ITEM SSSUUREE YOU'LL\",\n\"448\": \"OVEERCCOMEE\",\n\"463\": \"OVEERCCOMEE THIS\",\n\"504\": \"OVEERCCOMEE THIS YOU'LL\",\n\"513\": \"OVEERCCOMEE THIS YOU'LL WALK\",\n\"520\": \"OVEERCCOMEE THIS YOU'LL WALK AGAIN\",\n\"568\": \"OVEERCCOMEE THIS YOU'LL WALK AGAIN SO\",\n" 541 | ], 542 | "color": "#232", 543 | "bgcolor": "#353" 544 | }, 545 | { 546 | "id": 21, 547 | "type": "Save/Preview Text", 548 | "pos": [ 549 | -295, 550 | 46 551 | ], 552 | "size": [ 553 | 213.9429687500001, 554 | 858.1221081542969 555 | ], 556 | "flags": {}, 557 | "order": 9, 558 | "mode": 0, 559 | "inputs": [ 560 | { 561 | "name": "string", 562 | "type": "STRING", 563 | "link": 32, 564 | "widget": { 565 | "name": "string" 566 | } 567 | } 568 | ], 569 | "properties": { 570 | "Node name for S&R": "Save/Preview Text" 571 | }, 572 | "widgets_values": [ 573 | "text\\text", 574 | "", 575 | "[\n {\n \"word\": \"YOU'RE\",\n \"start_time\": 0.42,\n \"end_time\": 0.6\n },\n {\n \"word\": \"GOING\",\n \"start_time\": 0.64,\n \"end_time\": 0.78\n },\n {\n \"word\": \"TO\",\n \"start_time\": 0.86,\n \"end_time\": 0.9\n },\n {\n \"word\": \"BE\",\n \"start_time\": 1.0,\n \"end_time\": 1.04\n },\n {\n \"word\": \"ALLYL\",\n \"start_time\": 1.18,\n \"end_time\": 1.32\n },\n {\n \"word\": \"RRIIIGHHT\",\n \"start_time\": 1.4,\n \"end_time\": 1.62\n },\n {\n \"word\": \"YOU\",\n \"start_time\": 3.04,\n \"end_time\": 3.18\n },\n {\n \"word\": \"JUST\",\n \"start_time\": 3.24,\n \"end_time\": 3.36\n },\n {\n \"word\": \"SSTTUMBLLLED\",\n \"start_time\": 3.46,\n \"end_time\": 3.86\n },\n {\n \"word\": \"OVER\",\n \"start_time\": 3.96,\n \"end_time\": 4.12\n },\n {\n \"word\": \"A\",\n \"start_time\": 4.22,\n \"end_time\": 4.22\n },\n {\n \"word\": \"SSSTTONEE\",\n \"start_time\": 4.36,\n \"end_time\": 4.72\n },\n {\n \"word\": \"IN\",\n \"start_time\": 4.8,\n \"end_time\": 4.82\n },\n {\n \"word\": \"THE\",\n \"start_time\": 4.9,\n \"end_time\": 4.94\n },\n {\n \"word\": \"ROAD\",\n \"start_time\": 5.04,\n \"end_time\": 5.3\n },\n {\n \"word\": \"IT\",\n \"start_time\": 6.74,\n \"end_time\": 6.78\n },\n {\n \"word\": \"MEANNESS\",\n \"start_time\": 6.92,\n \"end_time\": 7.2\n },\n {\n \"word\": \"NNOOTHHIINNG\",\n \"start_time\": 7.4,\n \"end_time\": 7.76\n },\n {\n \"word\": \"YOUR\",\n \"start_time\": 
9.24,\n \"end_time\": 9.38\n },\n {\n \"word\": \"GOLD\",\n \"start_time\": 9.5,\n \"end_time\": 9.74\n },\n {\n \"word\": \"LESS\",\n \"start_time\": 9.84,\n \"end_time\": 10.04\n },\n {\n \"word\": \"FARER\",\n \"start_time\": 10.2,\n \"end_time\": 10.4\n },\n {\n \"word\": \"BEYYONNDD\",\n \"start_time\": 10.52,\n \"end_time\": 10.82\n },\n {\n \"word\": \"THIS\",\n \"start_time\": 10.94,\n \"end_time\": 11.12\n },\n {\n \"word\": \"DOESN'T\",\n \"start_time\": 12.2,\n \"end_time\": 12.48\n },\n {\n \"word\": \"IT\",\n \"start_time\": 12.56,\n \"end_time\": 12.58\n },\n {\n \"word\": \"ITEM\",\n \"start_time\": 14.22,\n \"end_time\": 14.32\n },\n {\n \"word\": \"SSSUUREE\",\n \"start_time\": 14.48,\n \"end_time\": 14.68\n },\n {\n \"word\": \"YOU'LL\",\n \"start_time\": 14.74,\n \"end_time\": 14.94\n },\n {\n \"word\": \"OVEERCCOMEE\",\n \"start_time\": 15.02,\n \"end_time\": 15.42\n },\n {\n \"word\": \"THIS\",\n \"start_time\": 15.54,\n \"end_time\": 15.72\n },\n {\n \"word\": \"YOU'LL\",\n \"start_time\": 16.9,\n \"end_time\": 17.14\n },\n {\n \"word\": \"WALK\",\n \"start_time\": 17.22,\n \"end_time\": 17.34\n },\n {\n \"word\": \"AGAIN\",\n \"start_time\": 17.46,\n \"end_time\": 17.66\n },\n {\n \"word\": \"SO\",\n \"start_time\": 19.06,\n \"end_time\": 19.16\n }\n]" 576 | ], 577 | "color": "#232", 578 | "bgcolor": "#353" 579 | }, 580 | { 581 | "id": 13, 582 | "type": "Save/Preview Text", 583 | "pos": [ 584 | 1050, 585 | 59 586 | ], 587 | "size": [ 588 | 210, 589 | 609.6520812988282 590 | ], 591 | "flags": {}, 592 | "order": 12, 593 | "mode": 0, 594 | "inputs": [ 595 | { 596 | "name": "string", 597 | "type": "STRING", 598 | "link": 9, 599 | "widget": { 600 | "name": "string" 601 | } 602 | } 603 | ], 604 | "properties": { 605 | "Node name for S&R": "Save/Preview Text" 606 | }, 607 | "widgets_values": [ 608 | "text\\text", 609 | "", 610 | "\"13\": \"YOU'RE\",\n\"19\": \"GOING\",\n\"26\": \"TO\",\n\"30\": \"BE\",\n\"35\": \"ALL\",\n\"42\": \"RIGHT\",\n\"91\": \"YOU\",\n\"97\": \"JUST\",\n\"103\": \"STUMBLED\",\n\"118\": \"OVER\",\n\"126\": \"A\",\n\"130\": \"STONE\",\n\"143\": \"IN\",\n\"146\": \"THE\",\n\"150\": \"ROAD\",\n\"201\": \"IT\",\n\"206\": \"MEANS\",\n\"221\": \"NOTHING\",\n\"275\": \"YOUR\",\n\"283\": \"GOAL\",\n\"293\": \"LAYS\",\n\"304\": \"FAR\",\n\"313\": \"BEYOND\",\n\"326\": \"THIS\",\n\"364\": \"DOESN'T\",\n\"374\": \"IT\",\n\"424\": \"IM\",\n\"432\": \"SURE\",\n\"439\": \"YOU'LL\",\n\"448\": \"OVERCOME\",\n\"463\": \"THIS\",\n\"504\": \"YOU'LL\",\n\"513\": \"WALK\",\n\"520\": \"AGAIN\",\n\"568\": \"SOON\"," 611 | ], 612 | "color": "#232", 613 | "bgcolor": "#353" 614 | }, 615 | { 616 | "id": 5, 617 | "type": "Speech Recognition", 618 | "pos": [ 619 | -636, 620 | 45 621 | ], 622 | "size": { 623 | "0": 315, 624 | "1": 262 625 | }, 626 | "flags": {}, 627 | "order": 4, 628 | "mode": 0, 629 | "inputs": [ 630 | { 631 | "name": "audio_file", 632 | "type": "STRING", 633 | "link": 1, 634 | "widget": { 635 | "name": "audio_file" 636 | } 637 | }, 638 | { 639 | "name": "fps", 640 | "type": "INT", 641 | "link": 19, 642 | "widget": { 643 | "name": "fps" 644 | } 645 | } 646 | ], 647 | "outputs": [ 648 | { 649 | "name": "transcription", 650 | "type": "TRANSCRIPTION", 651 | "links": [ 652 | 35 653 | ], 654 | "shape": 3, 655 | "slot_index": 0 656 | }, 657 | { 658 | "name": "raw_string", 659 | "type": "STRING", 660 | "links": [ 661 | 33 662 | ], 663 | "shape": 3, 664 | "slot_index": 1 665 | }, 666 | { 667 | "name": "framestamps_string", 668 | "type": "STRING", 669 | "links": [ 670 | 
34 671 | ], 672 | "shape": 3, 673 | "slot_index": 2 674 | }, 675 | { 676 | "name": "timestamps_string", 677 | "type": "STRING", 678 | "links": [ 679 | 32 680 | ], 681 | "shape": 3, 682 | "slot_index": 3 683 | } 684 | ], 685 | "properties": { 686 | "Node name for S&R": "Speech Recognition" 687 | }, 688 | "widgets_values": [ 689 | "", 690 | "facebook/wav2vec2-base-960h", 691 | "English", 692 | 50, 693 | 30, 694 | "word", 695 | true 696 | ], 697 | "color": "#432", 698 | "bgcolor": "#653" 699 | } 700 | ], 701 | "links": [ 702 | [ 703 | 1, 704 | 8, 705 | 1, 706 | 5, 707 | 0, 708 | "STRING" 709 | ], 710 | [ 711 | 6, 712 | 2, 713 | 0, 714 | 9, 715 | 0, 716 | "TEXT_GRAPHIC_ELEMENT" 717 | ], 718 | [ 719 | 8, 720 | 1, 721 | 0, 722 | 9, 723 | 1, 724 | "CANVAS_SETTINGS" 725 | ], 726 | [ 727 | 9, 728 | 9, 729 | 1, 730 | 13, 731 | 0, 732 | "STRING" 733 | ], 734 | [ 735 | 11, 736 | 9, 737 | 0, 738 | 7, 739 | 0, 740 | "IMAGE" 741 | ], 742 | [ 743 | 12, 744 | 8, 745 | 1, 746 | 7, 747 | 1, 748 | "STRING" 749 | ], 750 | [ 751 | 13, 752 | 8, 753 | 2, 754 | 7, 755 | 2, 756 | "INT" 757 | ], 758 | [ 759 | 14, 760 | 8, 761 | 3, 762 | 9, 763 | 4, 764 | "INT" 765 | ], 766 | [ 767 | 15, 768 | 8, 769 | 4, 770 | 1, 771 | 1, 772 | "INT" 773 | ], 774 | [ 775 | 16, 776 | 8, 777 | 5, 778 | 1, 779 | 2, 780 | "INT" 781 | ], 782 | [ 783 | 17, 784 | 8, 785 | 0, 786 | 1, 787 | 0, 788 | "IMAGE" 789 | ], 790 | [ 791 | 19, 792 | 8, 793 | 2, 794 | 5, 795 | 1, 796 | "INT" 797 | ], 798 | [ 799 | 24, 800 | 16, 801 | 0, 802 | 2, 803 | 0, 804 | "INT" 805 | ], 806 | [ 807 | 25, 808 | 17, 809 | 0, 810 | 2, 811 | 1, 812 | "INT" 813 | ], 814 | [ 815 | 26, 816 | 4, 817 | 0, 818 | 2, 819 | 2, 820 | "STRING" 821 | ], 822 | [ 823 | 32, 824 | 5, 825 | 3, 826 | 21, 827 | 0, 828 | "STRING" 829 | ], 830 | [ 831 | 33, 832 | 5, 833 | 1, 834 | 22, 835 | 0, 836 | "STRING" 837 | ], 838 | [ 839 | 34, 840 | 5, 841 | 2, 842 | 23, 843 | 0, 844 | "STRING" 845 | ], 846 | [ 847 | 35, 848 | 5, 849 | 0, 850 | 9, 851 | 2, 852 | "TRANSCRIPTION" 853 | ] 854 | ], 855 | "groups": [], 856 | "config": {}, 857 | "extra": {}, 858 | "version": 0.4 859 | } -------------------------------------------------------------------------------- /font_files/AURORA-PRO.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/AURORA-PRO.otf -------------------------------------------------------------------------------- /font_files/Akira Expanded Demo.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Akira Expanded Demo.otf -------------------------------------------------------------------------------- /font_files/Another Danger - Demo.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Another Danger - Demo.otf -------------------------------------------------------------------------------- /font_files/Doctor Glitch.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Doctor Glitch.otf -------------------------------------------------------------------------------- /font_files/Ghastly 
Panic.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Ghastly Panic.ttf -------------------------------------------------------------------------------- /font_files/MetalGothic-Regular.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/MetalGothic-Regular.ttf -------------------------------------------------------------------------------- /font_files/Montserrat-Black.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Montserrat-Black.ttf -------------------------------------------------------------------------------- /font_files/The Constellation.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/The Constellation.ttf -------------------------------------------------------------------------------- /font_files/The-Augusta.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/The-Augusta.otf -------------------------------------------------------------------------------- /font_files/Vogue.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Vogue.ttf -------------------------------------------------------------------------------- /font_files/Wreckside.otf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ForeignGods/ComfyUI-Mana-Nodes/47ea4cedc91f0fa87c4b0cbce16faba4efdbf18c/font_files/Wreckside.otf -------------------------------------------------------------------------------- /helpers/logger.py: -------------------------------------------------------------------------------- 1 | import copy 2 | import logging 3 | 4 | class ColoredFormatter(logging.Formatter): 5 | COLORS = { 6 | "DEBUG": "\033[0;36m", # CYAN 7 | "INFO": "\033[0;32m", # GREEN 8 | "WARNING": "\033[0;33m", # YELLOW 9 | "ERROR": "\033[0;31m", # RED 10 | "CRITICAL": "\033[0;37;41m", # WHITE ON RED 11 | "RESET": "\033[0m", # RESET COLOR 12 | } 13 | 14 | def format(self, record): 15 | colored_record = copy.copy(record) 16 | levelname = colored_record.levelname 17 | seq = self.COLORS.get(levelname, self.COLORS["RESET"]) 18 | colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}" 19 | return super().format(colored_record) 20 | 21 | def logger(): 22 | def error(*args, **kwargs): 23 | pass 24 | 25 | return type("Logger", (), {"error": error})() 26 | 27 | # Create a new logger 28 | #logger = logging.getLogger("Mana") 29 | #logger.propagate = False 30 | 31 | # Add handler if we don't have one. 
32 | #if not logger.handlers: 33 | # handler = logging.StreamHandler(sys.stdout) 34 | # handler.setFormatter(ColoredFormatter("[%(name)s] - %(levelname)s - %(message)s")) 35 | # logger.addHandler(handler) 36 | 37 | # Configure logger 38 | #loglevel = logging.INFO 39 | #logger.setLevel(loglevel) 40 | -------------------------------------------------------------------------------- /helpers/utils.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import torch 3 | import numpy as np 4 | import subprocess 5 | from PIL import Image 6 | from torch.nn.functional import pad 7 | 8 | # Tensor to PIL 9 | def tensor2pil(image): 10 | return Image.fromarray(np.clip(255.0 * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 11 | 12 | 13 | # Convert PIL to Tensor 14 | def pil2tensor(image): 15 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 16 | 17 | def ensure_opencv(): 18 | if "python_embeded" in sys.executable or "python_embedded" in sys.executable: 19 | pip_install = [sys.executable, "-s", "-m", "pip", "install"] 20 | else: 21 | pip_install = [sys.executable, "-m", "pip", "install"] 22 | 23 | try: 24 | import cv2 25 | except Exception as e: 26 | try: 27 | subprocess.check_call(pip_install + ['opencv-python']) 28 | except Exception: 29 | print('failed to install/import cv2 (opencv-python)') 30 | 31 | def stack_audio_tensors(tensors, mode="pad"): 32 | # assert all(len(x.shape) == 2 for x in tensors) 33 | sizes = [x.shape[-1] for x in tensors] 34 | 35 | if mode in {"pad_l", "pad_r", "pad"}: 36 | # pad input tensors to be equal length 37 | dst_size = max(sizes) 38 | stack_tensors = ( 39 | [pad(x, pad=(0, dst_size - x.shape[-1])) for x in tensors] 40 | if mode == "pad_r" 41 | else [pad(x, pad=(dst_size - x.shape[-1], 0)) for x in tensors] 42 | ) 43 | elif mode in {"trunc_l", "trunc_r", "trunc"}: 44 | # truncate input tensors to be equal length 45 | dst_size = min(sizes) 46 | stack_tensors = ( 47 | [x[:, x.shape[-1] - dst_size:] for x in tensors] 48 | if mode == "trunc_r" 49 | else [x[:, :dst_size] for x in tensors] 50 | ) 51 | else: 52 | assert False, f'unknown mode "{mode}"' 53 | 54 | return torch.stack(stack_tensors) -------------------------------------------------------------------------------- /nodes/audio2video_node.py: -------------------------------------------------------------------------------- 1 | from torchvision.transforms.functional import to_tensor 2 | import torch 3 | import numpy as np 4 | from moviepy.editor import AudioFileClip,ImageSequenceClip 5 | import os 6 | from ..helpers.utils import tensor2pil 7 | from pathlib import Path 8 | import folder_paths 9 | 10 | class audio2video: 11 | 12 | def __init__(self): 13 | pass 14 | 15 | @classmethod 16 | def INPUT_TYPES(cls): 17 | return { 18 | "required": { 19 | "images": ("IMAGE", {"display": "text", "forceInput": True}), 20 | "filename_prefix": ("STRING", {"default": "video\\video"}) 21 | }, 22 | "optional": { 23 | "audio_file": ("STRING", {"display": "text", "forceInput": True}), 24 | "fps": ("INT", {"default": 30, "min": 1, "max": 60, "step": 1}) 25 | }, 26 | "hidden": { 27 | "prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO" 28 | } 29 | 30 | } 31 | 32 | CATEGORY = "💠 Mana Nodes" 33 | RETURN_TYPES = ("STRING",) 34 | RETURN_NAMES = ("video_file",) 35 | FUNCTION = "run" 36 | OUTPUT_NODE = True 37 | 38 | def run(self, **kwargs): 39 | fps = kwargs.get('fps', 30) 40 | 41 | # Prepare frames 42 | pil_frames = self.prepare_frames(kwargs['images']) 43 | 44 | # Construct video path 45 | 
full_path = self.construct_video_path(kwargs) 46 | 47 | # Create and save the video clip 48 | video_clip = self.create_video_clip(pil_frames, fps, kwargs.get('audio_file',None), full_path) 49 | 50 | # Generate preview metadata 51 | preview = self.generate_preview_metadata(full_path) 52 | 53 | #return {"ui": {"videos": preview}} 54 | return {"ui": {"videos": preview}, "result": ((True, full_path),)} 55 | 56 | 57 | def prepare_frames(self, images): 58 | pil_frames = [] 59 | for frame in images: 60 | if not isinstance(frame, torch.Tensor): 61 | frame = to_tensor(frame) 62 | frame.permute(2, 1, 0) 63 | pil_image = tensor2pil(frame) 64 | pil_frames.append(pil_image) 65 | return pil_frames 66 | 67 | def construct_video_path(self, kwargs): 68 | base_directory = folder_paths.get_output_directory() 69 | filename_prefix = os.path.normpath(kwargs['filename_prefix']) 70 | full_path = os.path.join(base_directory, filename_prefix) 71 | 72 | # Ensure the path ends with .mp4 73 | if not full_path.endswith('.mp4'): 74 | full_path += '.mp4' 75 | 76 | # Increment filename if it already exists 77 | counter = 1 78 | while os.path.exists(full_path): 79 | # Construct a new path with an incremented number 80 | new_filename = f"{filename_prefix}_{counter}.mp4" 81 | full_path = os.path.join(base_directory, new_filename) 82 | counter += 1 83 | 84 | # Create the directory if it does not exist 85 | Path(os.path.dirname(full_path)).mkdir(parents=True, exist_ok=True) 86 | 87 | return full_path 88 | 89 | # def create_video_clip(self, frames, fps, audio_file_path, output_path): 90 | # numpy_frames = [np.array(frame) for frame in frames] 91 | # video_clip = ImageSequenceClip(numpy_frames, fps=fps) 92 | # video_clip = video_clip.set_audio(AudioFileClip(audio_file_path)) 93 | # video_clip.write_videofile(output_path, codec="libx264", fps=fps) 94 | # return video_clip 95 | 96 | def create_video_clip(self, pil_frames, fps, audio_file, full_path): 97 | numpy_frames = [np.array(frame) for frame in pil_frames] 98 | video_clip = ImageSequenceClip(numpy_frames, fps=fps) 99 | 100 | # Add audio to the video clip if audio_file is not None 101 | if audio_file is not None: 102 | audio_clip = AudioFileClip(audio_file) 103 | video_clip = video_clip.set_audio(audio_clip) 104 | 105 | # Write the video clip to a file 106 | video_clip.write_videofile(full_path, codec='libx264') 107 | 108 | return video_clip 109 | 110 | def generate_preview_metadata(self, full_path): 111 | filename = os.path.basename(full_path) 112 | parent_folder = os.path.dirname(full_path) 113 | 114 | # Check if the parent folder is the output folder 115 | if os.path.basename(parent_folder) == 'output': 116 | subfolder = '' # No subfolder 117 | else: 118 | subfolder = os.path.basename(parent_folder) # Subfolder exists 119 | 120 | return [ 121 | { 122 | "filename": filename, 123 | "subfolder": subfolder, 124 | "type": "output", 125 | "format": "video/mp4", 126 | } 127 | ] 128 | 129 | #@classmethod 130 | #def IS_CHANGED(cls, video, *args, **kwargs): 131 | # video_path = folder_paths.get_annotated_filepath(video) 132 | # m = hashlib.sha256() 133 | # with open(video_path, "rb") as f: 134 | # m.update(f.read()) 135 | # return m.digest().hex() 136 | 137 | #@classmethod 138 | #def VALIDATE_INPUTS(cls, video, *args, **kwargs): 139 | # if not folder_paths.exists_annotated_filepath(video): 140 | # return "Invalid video file: {}".format(video) 141 | # return True 142 | -------------------------------------------------------------------------------- /nodes/canvas_settings_node.py: 
-------------------------------------------------------------------------------- 1 | class canvas_settings: 2 | 3 | def __init__(self): 4 | pass 5 | 6 | @classmethod 7 | def INPUT_TYPES(cls): 8 | alignment_options = ["left top", "left center", "left bottom", 9 | "center top", "center center", "center bottom", 10 | "right top", "right center", "right bottom"] 11 | return { 12 | "required": { 13 | "height": ("INT", {"default": 512, "step": 1, "display": "number"}), 14 | "width": ("INT", {"default": 512, "step": 1, "display": "number"}), 15 | "background_color": ("STRING", {"default": "black", "display": "text"}), 16 | "text_alignment": (alignment_options, {"default": "center center", "display": "dropdown"}), 17 | "padding": ("INT", {"default": 0, "min": 0, "step": 1, "display": "number"}), 18 | "line_spacing": ("INT", {"default": 5, "step": 1, "display": "number"}), 19 | }, 20 | "optional": { 21 | "images": ("IMAGE", {"default": None}), 22 | }, 23 | } 24 | 25 | CATEGORY = "💠 Mana Nodes/⚙️ Generator Settings" 26 | RETURN_TYPES = ("CANVAS_SETTINGS",) 27 | RETURN_NAMES = ("canvas",) 28 | FUNCTION = "run" 29 | 30 | def run(self, **kwargs): 31 | 32 | settings = { 33 | 'height': kwargs.get('height'), 34 | 'width': kwargs.get('width'), 35 | 'background_color': kwargs.get('background_color'), 36 | 'text_alignment': kwargs.get('text_alignment'), 37 | 'padding': kwargs.get('padding'), 38 | 'images': kwargs.get('images'), 39 | 'line_spacing': kwargs.get('line_spacing'), 40 | } 41 | 42 | return (settings,) 43 | 44 | -------------------------------------------------------------------------------- /nodes/color_animations_node.py: -------------------------------------------------------------------------------- 1 | class color_animations: 2 | 3 | def __init__(self): 4 | pass 5 | 6 | @classmethod 7 | def INPUT_TYPES(cls): 8 | animation_reset = ["word", "line", "never","looped","pingpong"] 9 | color_animations = ["rainbow", "sunset", "grey", "ocean", "forest", "fire", "sky", "earth"] 10 | return { 11 | "required": { 12 | "color_preset": (color_animations, {"default": "rainbow", "display": "dropdown"}), 13 | "animation_duration": ("INT", {"default": 30, "step": 1, "display": "number"}), 14 | "animation_reset": (animation_reset, {"default": "word", "display": "dropdown"}), 15 | 16 | } 17 | } 18 | 19 | CATEGORY = "💠 Mana Nodes/📅 Value Scheduling" 20 | RETURN_TYPES = ("STRING",) 21 | RETURN_NAMES = ("scheduled_colors",) 22 | FUNCTION = "run" 23 | 24 | def run(self, **kwargs): 25 | color_animations = kwargs.get('color_preset') 26 | animation_reset = kwargs.get('animation_reset') 27 | animation_duration = kwargs.get('animation_duration') 28 | 29 | scheduled_values = self.generate_color_schedule(color_animations, animation_duration, animation_reset) 30 | return (scheduled_values,) 31 | 32 | def generate_color_schedule(self, color_type, duration, animation_reset): 33 | # Define colors for each type 34 | color_definitions = { 35 | "rainbow": [(255, 0, 0), (255, 165, 0), (255, 255, 0), (0, 255, 0), (0, 0, 255), (75, 0, 130), (238, 130, 238)], 36 | "sunset": [(255, 76, 0), (255, 108, 0), (255, 139, 0), (255, 171, 0), (255, 203, 0)], 37 | "grey": [(50, 50, 50), (100, 100, 100), (150, 150, 150), (200, 200, 200), (250, 250, 250)], 38 | "ocean": [(0, 0, 255), (0, 0, 200), (0, 0, 150), (0, 0, 100), (0, 0, 50)], 39 | "forest": [(0, 100, 0), (34, 139, 34), (46, 139, 87), (60, 179, 113), (85, 107, 47)], 40 | "fire": [(255, 0, 0), (255, 69, 0), (255, 99, 71), (255, 140, 0), (255, 165, 0)], 41 | "sky": [(135, 206, 235), (135, 
206, 250), (70, 130, 180), (100, 149, 237), (240, 248, 255)], 42 | "earth": [(101, 67, 33), (139, 69, 19), (160, 82, 45), (165, 42, 42), (210, 105, 30)] 43 | # TODO: add more color presets 44 | } 45 | 46 | colors = color_definitions.get(color_type, [(0, 0, 0)]) # Default to black if type is unknown 47 | color_schedule = [] 48 | 49 | for i in range(duration): 50 | # Interpolate color 51 | t = i / duration 52 | index = int(t * (len(colors) - 1)) 53 | next_index = min(index + 1, len(colors) - 1) 54 | fraction = (t * (len(colors) - 1)) - index 55 | color = tuple(int((1 - fraction) * colors[index][j] + fraction * colors[next_index][j]) for j in range(3)) 56 | 57 | color_schedule.append({'x': i + 1, 'y': color}) 58 | 59 | return (color_schedule, animation_reset) -------------------------------------------------------------------------------- /nodes/font2img_node.py: -------------------------------------------------------------------------------- 1 | import os 2 | from PIL import Image, ImageDraw, ImageFont, ImageOps, ImageColor 3 | import PIL 4 | import numpy as np 5 | import torch 6 | from torchvision import transforms 7 | from matplotlib import font_manager 8 | import re 9 | 10 | class font2img: 11 | 12 | FONTS = {} 13 | FONT_NAMES = [] 14 | 15 | def __init__(self): 16 | pass 17 | 18 | @classmethod 19 | def system_font_names(self): 20 | mgr = font_manager.FontManager() 21 | return {font.name: font.fname for font in mgr.ttflist} 22 | 23 | @classmethod 24 | def get_font_files(self, font_dir): 25 | extensions = ['.ttf', '.otf', '.woff', '.woff2'] 26 | return [os.path.join(font_dir, f) for f in os.listdir(font_dir) 27 | if os.path.isfile(os.path.join(font_dir, f)) and f.endswith(tuple(extensions))] 28 | 29 | @classmethod 30 | def setup_font_directories(self): 31 | script_dir = os.path.dirname(os.path.dirname(__file__)) 32 | custom_font_files = [] 33 | for dir_name in ['font', 'font_files']: 34 | font_dir = os.path.join(script_dir, dir_name) 35 | if os.path.exists(font_dir): 36 | custom_font_files.extend(self.get_font_files(font_dir)) 37 | return custom_font_files 38 | 39 | @classmethod 40 | def combined_font_list(self): 41 | system_fonts = self.system_font_names() 42 | custom_font_files = self.setup_font_directories() 43 | 44 | # Create a dictionary for custom fonts mapping font file base names to their paths 45 | custom_fonts = {os.path.splitext(os.path.basename(f))[0]: f for f in custom_font_files} 46 | 47 | # Merge system_fonts and custom_fonts dictionaries 48 | all_fonts = {**system_fonts, **custom_fonts} 49 | return all_fonts 50 | 51 | def get_font(self, font_name, font_size) -> ImageFont.FreeTypeFont: 52 | font_file = self.FONTS[font_name] 53 | return ImageFont.truetype(font_file, font_size) 54 | 55 | @classmethod 56 | def INPUT_TYPES(self): 57 | 58 | self.FONTS = self.combined_font_list() 59 | self.FONT_NAMES = sorted(self.FONTS.keys()) 60 | return { 61 | "required": { 62 | "font": ("TEXT_GRAPHIC_ELEMENT", {"default": None,"forceInput": True}), 63 | "text": ("STRING", {"multiline": True, "placeholder": "\"1\": \"Hello\",\n\"10\": \"Hello World\""}), 64 | "canvas": ("CANVAS_SETTINGS", {"default": None,"forceInput": True}), 65 | "frame_count": ("INT", {"default": 1, "min": 1, "step": 1, "display": "number"}), 66 | }, 67 | "optional": { 68 | "transcription": ("TRANSCRIPTION", {"default": None,"forceInput": True}), 69 | "highlight_font": ("TEXT_GRAPHIC_ELEMENT", {"default": None,"forceInput": True}), 70 | "skip_first_frames": ("INT", {"default": 0, "min": 0, "step": 1, "display": "number"}), 71 
| } 72 | } 73 | 74 | RETURN_TYPES = ("IMAGE", "STRING",) 75 | RETURN_NAMES = ("images", "framestamps_string",) 76 | FUNCTION = "run" 77 | CATEGORY = "💠 Mana Nodes" 78 | 79 | def run(self, **kwargs): 80 | frame_count = kwargs['frame_count'] 81 | images = kwargs.get('canvas', {}).get('images', [None] * frame_count) 82 | transcription = kwargs.get('transcription', None) 83 | text = kwargs.get('text') 84 | 85 | if transcription != None: 86 | formatted_transcription = self.format_transcription(kwargs) 87 | text = formatted_transcription 88 | else: 89 | formatted_transcription = text 90 | 91 | frame_text_dict, is_structured_input = self.parse_text_input(text, kwargs) 92 | frame_text_dict = self.cumulative_text(frame_text_dict, frame_count) 93 | 94 | images = self.generate_images(frame_text_dict,images , kwargs) 95 | image_batch = torch.cat(images, dim=0) 96 | 97 | return (image_batch, formatted_transcription,) 98 | 99 | def format_transcription(self, kwargs): 100 | if not kwargs['transcription']: 101 | return "" 102 | 103 | highlight_font = kwargs.get('highlight_font', None) 104 | transcription_fps = kwargs['transcription']['fps'] 105 | transcription_mode = kwargs['transcription']['transcription_mode'] 106 | transcription_data = kwargs['transcription']['transcription_data'] 107 | image_width = kwargs['canvas']['width'] 108 | padding = kwargs['canvas']['padding'] 109 | formatted_transcription = "" 110 | current_sentence = "" 111 | sentence_words = [] 112 | sentence_frame_numbers = [] 113 | 114 | for i, (word, start_time, end_time) in enumerate(transcription_data): 115 | frame_number = 1 + round(start_time * transcription_fps) 116 | 117 | if not current_sentence: 118 | current_sentence = word 119 | sentence_words = [word] 120 | sentence_frame_numbers = [frame_number] 121 | else: 122 | new_sentence = current_sentence + " " + word 123 | width = self.get_text_width(new_sentence, kwargs) 124 | if width <= image_width - padding: 125 | current_sentence = new_sentence 126 | sentence_words.append(word) 127 | sentence_frame_numbers.append(frame_number) 128 | else: 129 | if transcription_mode == "line": 130 | # Format each word in the sentence with tags and output with the corresponding frame number 131 | for j, sentence_word in enumerate(sentence_words): 132 | if highlight_font is not None: 133 | sentence = ' '.join(["{}".format(w) if j == k else w for k, w in enumerate(sentence_words)]) 134 | else: 135 | sentence = ' '.join(sentence_words) 136 | formatted_transcription += f'"{sentence_frame_numbers[j]}": "{sentence}",\n' 137 | current_sentence = word 138 | sentence_words = [word] 139 | sentence_frame_numbers = [frame_number] 140 | else: 141 | current_sentence = word 142 | 143 | if transcription_mode == "fill": 144 | words = current_sentence.split() 145 | # Add tags around the last word in the sentence only if highlight_font is not None 146 | if words and highlight_font is not None: 147 | words[-1] = f"{words[-1]}" 148 | tagged_sentence = ' '.join(words) 149 | formatted_transcription += f'"{frame_number}": "{tagged_sentence}",\n' 150 | 151 | if transcription_mode == "word": 152 | formatted_transcription += f'"{frame_number}": "{word}",\n' 153 | 154 | # Handle the last sentence for 'line' and 'fill' modes 155 | if current_sentence: 156 | if transcription_mode == "line": 157 | for j, sentence_word in enumerate(sentence_words): 158 | if highlight_font is not None: 159 | tagged_sentence = ' '.join(["{}".format(w) if j == k else w for k, w in enumerate(sentence_words)]) 160 | else: 161 | tagged_sentence = ' 
'.join(sentence_words) 162 | formatted_transcription += f'"{sentence_frame_numbers[j]}": "{tagged_sentence}",\n' 163 | 164 | return formatted_transcription 165 | 166 | def parse_animation_duration(self, anim_list): 167 | """Parse animatable property and return its duration, which is the highest frame number defined.""" 168 | # Expecting a single string in list 169 | if isinstance(anim_list, list): 170 | # Find the highest 'x' value in the list, which represents the animation duration 171 | max_frame = max(item['x'] for item in anim_list) 172 | return max_frame 173 | else: 174 | return 1 # Default duration is 1 if the list is empty 175 | 176 | def calculate_pingpong_position(self, current_frame, duration): 177 | if duration <= 1: 178 | return 0 179 | cycle_length = duration * 2 - 2 180 | position = current_frame % cycle_length 181 | if position >= duration: 182 | return cycle_length - position 183 | return position 184 | 185 | def calculate_sequence_frame(self, current_frame, start_frame, duration, reset_mode): 186 | if reset_mode == 'word' or reset_mode == 'line': 187 | active = (current_frame - start_frame) < duration 188 | return (current_frame - start_frame) + 1 if active else duration 189 | elif reset_mode == 'never': 190 | return current_frame + 1 if current_frame <= duration else duration 191 | elif reset_mode == 'looped': 192 | return (current_frame % duration) + 1 193 | elif reset_mode == 'pingpong': 194 | return self.calculate_pingpong_position(current_frame, duration) 195 | return 1 196 | 197 | # Helper functions 198 | def animation_reset(self, animation_reset_mode, new_text, old_text, transcription_mode): 199 | if animation_reset_mode == 'word': 200 | return new_text.split() != old_text.split() 201 | elif animation_reset_mode == 'line': 202 | new_text = self.remove_tags(new_text) 203 | old_text = self.remove_tags(old_text) 204 | if transcription_mode == 'line': 205 | return new_text != old_text 206 | if transcription_mode == 'fill': 207 | return len(new_text.split()) < len(old_text.split()) 208 | return False 209 | 210 | def remove_tags(self, text): 211 | # Regex to find and 212 | cleaned_text = re.sub(r"", "", text) 213 | return cleaned_text 214 | 215 | def generate_images(self, frame_text_dict, input_images, kwargs): 216 | images = [] 217 | 218 | # background images or color 219 | prepared_images = self.prepare_image(input_images, kwargs) 220 | 221 | transcription = kwargs.get('transcription', None) 222 | if transcription != None: 223 | transcription_mode = transcription['transcription_mode'] 224 | else: 225 | transcription_mode = None 226 | 227 | main_font = kwargs.get('font', None) 228 | if main_font != None: 229 | main_font_file = main_font['font_file'] 230 | 231 | rotation = kwargs['font']['rotation'][0] 232 | y_offset = kwargs['font']['y_offset'][0] 233 | x_offset = kwargs['font']['x_offset'][0] 234 | font_size = kwargs['font']['font_size'][0] 235 | 236 | font_color = kwargs['font']['font_color'][0] 237 | border_color = kwargs['font']['border_color'][0] 238 | shadow_color = kwargs['font']['shadow_color'][0] 239 | 240 | animation_reset_rotation = kwargs['font']['rotation'][1] 241 | animation_reset_y_offset = kwargs['font']['y_offset'][1] 242 | animation_reset_x_offset = kwargs['font']['x_offset'][1] 243 | animation_reset_font_size = kwargs['font']['font_size'][1] 244 | 245 | animation_reset_font_color = kwargs['font']['font_color'][1] 246 | animation_reset_border_color = kwargs['font']['border_color'][1] 247 | animation_reset_shadow_color = 
kwargs['font']['shadow_color'][1] 248 | 249 | rotation_duration = self.parse_animation_duration(rotation) 250 | y_offset_duration = self.parse_animation_duration(y_offset) 251 | x_offset_duration = self.parse_animation_duration(x_offset) 252 | font_size_duration = self.parse_animation_duration(font_size) 253 | font_color_duration = self.parse_animation_duration(font_color) 254 | shadow_color_duration = self.parse_animation_duration(shadow_color) 255 | border_color_duration = self.parse_animation_duration(border_color) 256 | 257 | highlight_font = kwargs.get('highlight_font', None) 258 | if highlight_font != None: 259 | tagged_font_file = highlight_font['font_file'] 260 | 261 | tagged_font_size = highlight_font['font_size'][0] 262 | tagged_font_color = highlight_font['font_color'][0] 263 | tagged_border_color = highlight_font['border_color'][0] 264 | tagged_shadow_color = highlight_font['shadow_color'][0] 265 | 266 | animation_reset_tagged_font_size = highlight_font['font_size'][1] 267 | animation_reset_tagged_font_color = highlight_font['font_color'][1] 268 | animation_reset_tagged_border_color = highlight_font['border_color'][1] 269 | animation_reset_tagged_shadow_color = highlight_font['shadow_color'][1] 270 | 271 | tagged_font_size_duration = self.parse_animation_duration(tagged_font_size) 272 | tagged_font_color_duration = self.parse_animation_duration(tagged_font_color) 273 | tagged_border_color_duration = self.parse_animation_duration(tagged_border_color) 274 | tagged_shadow_color_duration = self.parse_animation_duration(tagged_shadow_color) 275 | 276 | frame_count = kwargs['frame_count'] 277 | removed_tags_last_text= '' 278 | last_text = "" 279 | animation_started_frame_rotation = 1 280 | animation_started_frame_y_offset = 1 281 | animation_started_frame_x_offset = 1 282 | animation_started_frame_font_size = 1 283 | 284 | animation_started_frame_font_color = 1 285 | animation_started_frame_border_color = 1 286 | animation_started_frame_shadow_color = 1 287 | 288 | animation_started_frame_tagged_font_size = 1 289 | animation_started_frame_tagged_font_color = 1 290 | animation_started_frame_tagged_border_color = 1 291 | animation_started_frame_tagged_shadow_color = 1 292 | 293 | first_pass = True 294 | 295 | # Ensure prepared_images is a list 296 | if not isinstance(prepared_images, list): 297 | prepared_images = [prepared_images] 298 | 299 | for i in range(1, frame_count + 1): 300 | text = frame_text_dict.get(str(i), "") 301 | removed_tags_text = self.remove_tags(text) 302 | 303 | if self.animation_reset(animation_reset_rotation, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 304 | animation_started_frame_rotation = i 305 | if self.animation_reset(animation_reset_y_offset, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 306 | animation_started_frame_y_offset = i 307 | if self.animation_reset(animation_reset_x_offset, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 308 | animation_started_frame_x_offset = i 309 | if self.animation_reset(animation_reset_font_size, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 310 | animation_started_frame_font_size = i 311 | if self.animation_reset(animation_reset_font_color, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 312 | animation_started_frame_font_color = i 313 | if self.animation_reset(animation_reset_border_color, removed_tags_text, 
removed_tags_last_text, transcription_mode) or first_pass == True: 314 | animation_started_frame_border_color = i 315 | if self.animation_reset(animation_reset_shadow_color, removed_tags_text, removed_tags_last_text, transcription_mode) or first_pass == True: 316 | animation_started_frame_shadow_color = i 317 | 318 | if highlight_font is not None: 319 | if self.animation_reset(animation_reset_tagged_font_size, text, last_text, transcription_mode) or first_pass == True: 320 | animation_started_frame_tagged_font_size = i 321 | if self.animation_reset(animation_reset_tagged_font_color, text, last_text, transcription_mode) or first_pass == True: 322 | animation_started_frame_tagged_font_color = i 323 | if self.animation_reset(animation_reset_tagged_border_color, text, last_text, transcription_mode) or first_pass == True: 324 | animation_started_frame_tagged_border_color = i 325 | if self.animation_reset(animation_reset_tagged_shadow_color, text, last_text, transcription_mode) or first_pass == True: 326 | animation_started_frame_tagged_shadow_color = i 327 | 328 | first_pass = False 329 | last_text = text 330 | removed_tags_last_text = removed_tags_text 331 | 332 | # Calculate sequence frames for each property 333 | sequence_frame_rotation = self.calculate_sequence_frame(i, animation_started_frame_rotation, rotation_duration, animation_reset_rotation) 334 | sequence_frame_y_offset = self.calculate_sequence_frame(i, animation_started_frame_y_offset, y_offset_duration, animation_reset_y_offset) 335 | sequence_frame_x_offset = self.calculate_sequence_frame(i, animation_started_frame_x_offset, x_offset_duration, animation_reset_x_offset) 336 | sequence_frame_font_size = self.calculate_sequence_frame(i, animation_started_frame_font_size, font_size_duration, animation_reset_font_size) 337 | sequence_frame_font_color = self.calculate_sequence_frame(i, animation_started_frame_font_color, font_color_duration, animation_reset_font_color) 338 | sequence_frame_border_color = self.calculate_sequence_frame(i, animation_started_frame_border_color, border_color_duration, animation_reset_border_color) 339 | sequence_frame_shadow_color = self.calculate_sequence_frame(i, animation_started_frame_shadow_color, shadow_color_duration, animation_reset_shadow_color) 340 | 341 | if highlight_font is not None: 342 | sequence_frame_tagged_font_size = self.calculate_sequence_frame(i, animation_started_frame_tagged_font_size, tagged_font_size_duration, animation_reset_tagged_font_size) 343 | sequence_frame_tagged_font_color = self.calculate_sequence_frame(i, animation_started_frame_tagged_font_color, tagged_font_color_duration, animation_reset_tagged_font_color) 344 | sequence_frame_tagged_border_color = self.calculate_sequence_frame(i, animation_started_frame_tagged_border_color, tagged_border_color_duration, animation_reset_tagged_border_color) 345 | sequence_frame_tagged_shadow_color = self.calculate_sequence_frame(i, animation_started_frame_tagged_shadow_color, tagged_shadow_color_duration, animation_reset_tagged_shadow_color) 346 | 347 | current_rotation = self.get_frame_specific_value(sequence_frame_rotation, rotation) if isinstance(rotation, list) else rotation 348 | current_y_offset = self.get_frame_specific_value(sequence_frame_y_offset, y_offset) if isinstance(y_offset, list) else y_offset 349 | current_x_offset = self.get_frame_specific_value(sequence_frame_x_offset, x_offset) if isinstance(x_offset, list) else x_offset 350 | current_font_size = self.get_frame_specific_value(sequence_frame_font_size, font_size) 
if isinstance(font_size, list) else font_size 351 | 352 | font = self.get_font(main_font_file, current_font_size) 353 | 354 | current_font_color = self.get_frame_specific_value(sequence_frame_font_color, font_color) if isinstance(font_color, list) else font_color 355 | current_border_color = self.get_frame_specific_value(sequence_frame_border_color, border_color) if isinstance(border_color, list) else border_color 356 | current_shadow_color = self.get_frame_specific_value(sequence_frame_shadow_color, shadow_color) if isinstance(shadow_color, list) else shadow_color 357 | 358 | if highlight_font is not None: 359 | current_tagged_font_size = self.get_frame_specific_value(sequence_frame_tagged_font_size, tagged_font_size) if isinstance(tagged_font_size, list) else tagged_font_size 360 | tagged_font = self.get_font(tagged_font_file, current_tagged_font_size) 361 | 362 | current_tagged_font_color = self.get_frame_specific_value(sequence_frame_tagged_font_color, tagged_font_color) if isinstance(tagged_font_color, list) else tagged_font_color 363 | current_tagged_border_color = self.get_frame_specific_value(sequence_frame_tagged_border_color, tagged_border_color) if isinstance(tagged_border_color, list) else tagged_border_color 364 | current_tagged_shadow_color = self.get_frame_specific_value(sequence_frame_tagged_shadow_color, tagged_shadow_color) if isinstance(tagged_shadow_color, list) else tagged_shadow_color 365 | else: 366 | tagged_font = font 367 | 368 | current_tagged_font_color = current_font_color 369 | current_tagged_border_color = current_border_color 370 | current_tagged_shadow_color = current_shadow_color 371 | 372 | image_index = min(i - 1, len(prepared_images) - 1) 373 | selected_image = prepared_images[image_index] 374 | 375 | draw = ImageDraw.Draw(selected_image) 376 | text_block_width, text_block_height = self.calculate_text_block_size(draw, text, font, tagged_font, kwargs) 377 | text_position = self.calculate_text_position(text_block_width, text_block_height, current_x_offset, current_y_offset, kwargs) 378 | processed_image = self.process_single_image(selected_image, 379 | text, 380 | font, 381 | current_rotation, 382 | current_x_offset, 383 | current_y_offset, 384 | text_position, 385 | tagged_font, 386 | current_font_color, 387 | current_border_color, 388 | current_shadow_color, 389 | current_tagged_font_color, 390 | current_tagged_border_color, 391 | current_tagged_shadow_color, 392 | kwargs) 393 | images.append(processed_image) 394 | return images 395 | 396 | def get_frame_specific_value(self, sequence_frame_number, value_list): 397 | if isinstance(value_list, list) and value_list: 398 | value_dict = {item['x']: item['y'] for item in value_list} 399 | current_value = value_dict.get(sequence_frame_number) 400 | 401 | if current_value is not None: 402 | return current_value 403 | else: 404 | # Find the last defined sequence value before the current sequence frame 405 | last_defined_frame = max((x for x in value_dict.keys() if x <= sequence_frame_number), default=1) 406 | return value_dict.get(last_defined_frame, value_list[0]['y']) 407 | else: 408 | return value_list 409 | 410 | def separate_text(self, text): 411 | tag_start = "" 412 | tag_end = "" 413 | tagged_parts = [] 414 | non_tagged_parts = [] 415 | while text: 416 | start_index = text.find(tag_start) 417 | end_index = text.find(tag_end) 418 | if start_index != -1 and end_index != -1: 419 | non_tagged_parts.append(text[:start_index]) 420 | tagged_parts.append(text[start_index + len(tag_start):end_index]) 421 | text = 
text[end_index + len(tag_end):] 422 | else: 423 | non_tagged_parts.append(text) 424 | break 425 | return ' '.join(non_tagged_parts), ' '.join(tagged_parts) 426 | 427 | def process_single_image(self, image, text, font, rotation_angle, x_offset, y_offset, text_position, tagged_font, font_color, border_color, shadow_color, tagged_font_color, tagged_border_color, tagged_shadow_color, kwargs ): 428 | rotation_anchor_x = kwargs['font']['rotation_anchor_x'][0] 429 | rotation_anchor_y = kwargs['font']['rotation_anchor_y'][0] 430 | border_width = kwargs['font']['border_width'][0] 431 | shadow_offset_x = kwargs['font']['shadow_offset_x'][0] 432 | 433 | # Create a larger canvas with the prepared image as the background 434 | orig_width, orig_height = image.size 435 | canvas_size = int(max(orig_width, orig_height) * 1.5) 436 | canvas = Image.new('RGBA', (canvas_size, canvas_size), (0, 0, 0, 0)) 437 | 438 | # Calculate text size and position 439 | draw = ImageDraw.Draw(canvas) 440 | text_block_width, text_block_height = self.calculate_text_block_size(draw, text, font, tagged_font, kwargs) 441 | text_x, text_y = text_position 442 | text_x += (canvas_size - orig_width) / 2 + x_offset 443 | text_y += (canvas_size - orig_height) / 2 + y_offset 444 | 445 | # Calculate the center of the text block 446 | text_center_x = text_x + text_block_width / 2 447 | text_center_y = text_y + text_block_height / 2 448 | 449 | total_kerning_width = sum(font.getlength(char) + kwargs['font']['kerning'][0] for char in text) - kwargs['font']['kerning'][0] * len(text) 450 | 451 | overlay = Image.new('RGBA', (int(text_block_width + border_width * 2 + shadow_offset_x + total_kerning_width), int(text_block_height + border_width * 2 + shadow_offset_x)), (255, 255, 255, 0)) 452 | draw_overlay = ImageDraw.Draw(overlay) 453 | 454 | # Draw text on overlays 455 | self.draw_text_on_overlay(draw_overlay, text, font, tagged_font, font_color, border_color, shadow_color, tagged_font_color, tagged_border_color, tagged_shadow_color, kwargs) 456 | canvas.paste(overlay, (int(text_x), int(text_y)), overlay) 457 | anchor = (text_center_x + rotation_anchor_x, text_center_y + rotation_anchor_y) 458 | rotated_canvas = canvas.rotate(rotation_angle, center=anchor, expand=0) 459 | 460 | # Create a new canvas to fill the background of the rotated image 461 | new_canvas = Image.new('RGBA', rotated_canvas.size, (0, 0, 0, 0)) 462 | 463 | # Paste the input image as the background of the new canvas 464 | new_canvas.paste(image, (int((rotated_canvas.size[0] - orig_width) / 2), int((rotated_canvas.size[1] - orig_height) / 2))) 465 | 466 | # Paste the rotated image onto the new canvas, keeping the background color 467 | new_canvas.paste(rotated_canvas, (0, 0), rotated_canvas) 468 | 469 | # Crop the canvas back to the original image dimensions 470 | cropped_image = new_canvas.crop(((canvas_size - orig_width) / 2, (canvas_size - orig_height) / 2, (canvas_size + orig_width) / 2, (canvas_size + orig_height) / 2)) 471 | 472 | return self.process_image_for_output(cropped_image) 473 | 474 | def draw_text_on_overlay(self, draw_overlay, text, font, tagged_font, font_color, border_color, shadow_color, tagged_font_color, tagged_border_color, tagged_shadow_color, kwargs): 475 | highlight_font = kwargs.get('highlight_font', None) 476 | if highlight_font != None: 477 | tagged_border_width = highlight_font['border_width'][0] 478 | tagged_shadow_offset_x = highlight_font['shadow_offset_x'][0] 479 | tagged_shadow_offset_y = highlight_font['shadow_offset_y'][0] 480 | else: 481 | 
tagged_border_width = 1 482 | tagged_shadow_offset_x = 0 483 | tagged_shadow_offset_y = 0 484 | 485 | main_border_width = kwargs['font']['border_width'][0] 486 | main_border_color = border_color 487 | main_shadow_offset_x = kwargs['font']['shadow_offset_x'][0] 488 | main_shadow_offset_y = kwargs['font']['shadow_offset_y'][0] 489 | main_shadow_color = shadow_color 490 | 491 | main_font_color = font_color 492 | main_font_kerning = kwargs['font']['kerning'][0] 493 | line_spacing = kwargs['canvas']['line_spacing'] 494 | 495 | y_text_overlay = 0 496 | x_text_overlay = main_border_width 497 | 498 | tag_start = "" 499 | tag_end = "" 500 | 501 | is_inside_tag = False 502 | current_font = font 503 | 504 | for line in text.split('\n'): 505 | while line: 506 | if line.startswith(tag_start): 507 | line = line[len(tag_start):] 508 | is_inside_tag = True 509 | current_font = tagged_font 510 | continue 511 | 512 | if line.startswith(tag_end): 513 | line = line[len(tag_end):] 514 | is_inside_tag = False 515 | current_font = font 516 | continue 517 | 518 | char = line[0] 519 | line = line[1:] 520 | 521 | # Adjust vertical position for tagged text 522 | if is_inside_tag: 523 | ascent, descent = current_font.getmetrics() 524 | font_offset = (font.getmetrics()[0] - ascent) + (descent - current_font.getmetrics()[1]) 525 | else: 526 | font_offset = 0 527 | 528 | if is_inside_tag: 529 | border_width = tagged_border_width 530 | border_color = tagged_border_color 531 | shadow_offset_x = tagged_shadow_offset_x 532 | shadow_offset_y = tagged_shadow_offset_y 533 | shadow_color = tagged_shadow_color 534 | font_color = tagged_font_color 535 | 536 | else: 537 | border_width = main_border_width 538 | border_color = main_border_color 539 | shadow_offset_x = main_shadow_offset_x 540 | shadow_offset_y = main_shadow_offset_y 541 | shadow_color = main_shadow_color 542 | font_color = main_font_color 543 | 544 | # Draw the shadow, text, and borders 545 | shadow_x = x_text_overlay + shadow_offset_x 546 | shadow_y = y_text_overlay + shadow_offset_y + font_offset 547 | 548 | # Draw the shadow 549 | draw_overlay.text( 550 | (x_text_overlay + shadow_offset_x, y_text_overlay + shadow_offset_y + font_offset), 551 | char, font=current_font, fill=shadow_color 552 | ) 553 | 554 | draw_overlay.text((shadow_x, shadow_y), char, font=current_font, fill=shadow_color) 555 | 556 | # Draw the border/stroke 557 | for dx in range(-border_width, border_width + 1): 558 | for dy in range(-border_width, border_width + 1): 559 | if dx == 0 and dy == 0: 560 | continue # Skip the character itself 561 | draw_overlay.text( 562 | (x_text_overlay + dx, y_text_overlay + dy + font_offset), 563 | char, font=current_font, fill=border_color 564 | ) 565 | 566 | # Draw the character 567 | draw_overlay.text( 568 | (x_text_overlay, y_text_overlay + font_offset), 569 | char, font=current_font, fill=font_color 570 | ) 571 | 572 | char_width = draw_overlay.textlength(char, font=current_font) 573 | x_text_overlay += char_width + main_font_kerning 574 | 575 | # Reset x position and increase y for next line 576 | x_text_overlay = main_border_width 577 | y_text_overlay += current_font.getbbox('Agy')[3] + line_spacing 578 | 579 | # Consider adding padding for the right border 580 | draw_overlay.text((x_text_overlay, y_text_overlay), '', font=font, fill=border_color) 581 | 582 | def get_text_width(self, text, kwargs): 583 | 584 | main_font = kwargs['font'] 585 | main_font_size = main_font['font_size'][0] 586 | main_font_file = main_font['font_file'] 587 | 588 | if 
isinstance(main_font_size, (list)): 589 | main_font_size = max(d['y'] for d in main_font_size) 590 | else: 591 | main_font_size = main_font_size 592 | 593 | # Load the font 594 | font = self.get_font(main_font_file, main_font_size) 595 | 596 | # Measure the size of the text rendered in the loaded font 597 | text_width = font.getlength(text) 598 | return text_width 599 | 600 | def calculate_text_position(self, text_width, text_height, x_offset, y_offset, kwargs): 601 | text_alignment = kwargs['canvas']['text_alignment'] 602 | image_width = kwargs['canvas']['width'] 603 | image_height = kwargs['canvas']['height'] 604 | padding = kwargs['canvas']['padding'] 605 | 606 | # Adjust the base position based on text_alignment and margin 607 | if text_alignment == "left top": 608 | base_x, base_y = padding, padding 609 | elif text_alignment == "left center": 610 | base_x, base_y = padding, padding + (image_height - text_height) // 2 611 | elif text_alignment == "left bottom": 612 | base_x, base_y = padding, image_height - text_height - padding 613 | elif text_alignment == "center top": 614 | base_x, base_y = (image_width - text_width) // 2, padding 615 | elif text_alignment == "center center": 616 | base_x, base_y = (image_width - text_width) // 2, (image_height - text_height) // 2 617 | elif text_alignment == "center bottom": 618 | base_x, base_y = (image_width - text_width) // 2, image_height - text_height - padding 619 | elif text_alignment == "right top": 620 | base_x, base_y = image_width - text_width - padding, padding 621 | elif text_alignment == "right center": 622 | base_x, base_y = image_width - text_width - padding, (image_height - text_height) // 2 623 | elif text_alignment == "right bottom": 624 | base_x, base_y = image_width - text_width - padding, image_height - text_height - padding 625 | else: # Default to center center 626 | base_x, base_y = (image_width - text_width) // 2, (image_height - text_height) // 2 627 | 628 | # Apply offsets 629 | final_x = base_x + x_offset 630 | final_y = base_y + y_offset 631 | 632 | return final_x, final_y 633 | 634 | def process_image_for_output(self, image) -> torch.Tensor: 635 | i = ImageOps.exif_transpose(image) 636 | if i.mode == 'I': 637 | i = i.point(lambda i: i * (1 / 255)) 638 | image = i.convert("RGB") 639 | image_np = np.array(image).astype(np.float32) / 255.0 640 | return torch.from_numpy(image_np)[None,] 641 | 642 | def calculate_text_block_size(self, draw, text, font, tagged_font, kwargs): 643 | lines = text.split('\n') 644 | max_width = 0 645 | font_height = font.getbbox('Agy')[3] # Height of a single line 646 | tagged_font_height = tagged_font.getbbox('Agy')[3] 647 | line_spacing = kwargs['canvas']['line_spacing'] 648 | 649 | for line in lines: 650 | non_tagged_text, tagged_text = self.separate_text(line) 651 | line_width = draw.textlength(non_tagged_text, font=font) 652 | tagged_line_width = draw.textlength(tagged_text, font=tagged_font) 653 | 654 | total_line_width = line_width + tagged_line_width 655 | max_width = max(max_width, total_line_width) 656 | 657 | total_height = max(font_height, tagged_font_height) * len(lines) + line_spacing * (len(lines) - 1) 658 | return max_width, total_height 659 | 660 | def parse_text_input(self, text, kwargs): 661 | structured_format = False 662 | frame_text_dict = {} 663 | frame_count = kwargs['frame_count'] 664 | skip_first_frames = kwargs.get('skip_first_frames', 0) 665 | 666 | # Filter out empty lines 667 | lines = [line for line in text.split('\n') if line.strip()] 668 | 669 | # Check if the 
input is in the structured format 670 | if all(':' in line and line.split(':')[0].strip().replace('"', '').isdigit() for line in lines): 671 | structured_format = True 672 | for line in lines: 673 | parts = line.split(':', 1) 674 | if len(parts) == 2: 675 | frame_number = str(int(parts[0].strip().replace('"', ''))-skip_first_frames) 676 | text = parts[1].strip().replace('"', '').replace(',', '') 677 | frame_text_dict[frame_number] = text 678 | else: 679 | # If not in structured format, use the input text for all frames 680 | frame_text_dict = {str(i): text for i in range(1, frame_count + 1)} 681 | 682 | return frame_text_dict, structured_format 683 | 684 | def cumulative_text(self, frame_text_dict, frame_count): 685 | cumulative_text_dict = {} 686 | last_text = "" 687 | 688 | for i in range(1, frame_count + 1): 689 | if str(i) in frame_text_dict: 690 | last_text = frame_text_dict[str(i)] 691 | cumulative_text_dict[str(i)] = last_text 692 | 693 | return cumulative_text_dict 694 | 695 | # TODO: ugly method has to be refactored 696 | def prepare_image(self, input_image, kwargs): 697 | 698 | image_width = kwargs['canvas']['width'] 699 | image_height = kwargs['canvas']['height'] 700 | padding = kwargs['canvas']['padding'] 701 | background_color = kwargs['canvas']['background_color'] 702 | 703 | if not isinstance(input_image, list): 704 | if isinstance(input_image, torch.Tensor): 705 | if input_image.dtype == torch.float: 706 | input_image = (input_image * 255).byte() 707 | 708 | if input_image.ndim == 4: 709 | processed_images = [] 710 | for img in input_image: 711 | tensor_image = img.permute(2, 0, 1) 712 | transform = transforms.ToPILImage() 713 | 714 | try: 715 | pil_image = transform(tensor_image) 716 | except Exception as e: 717 | print("Error during conversion:", e) 718 | raise 719 | 720 | if float(PIL.__version__.split('.')[0]) < 10: 721 | processed_images.append(pil_image.resize((image_width, image_height), Image.ANTIALIAS)) 722 | else: 723 | processed_images.append(pil_image.resize((image_width, image_height), Image.LANCZOS)) 724 | return processed_images 725 | elif input_image.ndim == 3 and input_image.shape[0] in [3, 4]: 726 | tensor_image = input_image.permute(1, 2, 0) 727 | pil_image = transforms.ToPILImage()(tensor_image) 728 | 729 | if float(PIL.__version__.split('.')[0]) < 10: 730 | return pil_image.resize((image_width, image_height), Image.ANTIALIAS) 731 | else: 732 | return pil_image.resize((image_width, image_height), Image.LANCZOS) 733 | else: 734 | raise ValueError(f"Input image tensor has an invalid shape or number of channels: {input_image.shape}") 735 | elif input_image != None: 736 | return input_image.resize((image_width, image_height), Image.ANTIALIAS) 737 | else: 738 | background_color_tuple = ImageColor.getrgb(background_color) 739 | return Image.new('RGB', (image_width, image_height), color=background_color_tuple) 740 | else: 741 | background_color_tuple = ImageColor.getrgb(background_color) 742 | return Image.new('RGB', (image_width, image_height), color=background_color_tuple) 743 | -------------------------------------------------------------------------------- /nodes/scheduled_values_node.py: -------------------------------------------------------------------------------- 1 | class scheduled_values: 2 | 3 | def __init__(self): 4 | pass 5 | 6 | 7 | @classmethod 8 | def INPUT_TYPES(cls): 9 | animation_reset = ["word", "line", "never","looped","pingpong"] 10 | easing_types = [ 11 | "linear", 12 | "easeInQuad", 13 | "easeOutQuad", 14 | "easeInOutQuad", 15 | 
"easeInCubic", 16 | "easeOutCubic", 17 | "easeInOutCubic", 18 | "easeInQuart", 19 | "easeOutQuart", 20 | "easeInOutQuart", 21 | "easeInQuint", 22 | "easeOutQuint", 23 | "easeInOutQuint", 24 | "exponential" 25 | ] 26 | step_mode = ["single", "auto"] 27 | return { 28 | "required": { 29 | "frame_count": ("INT", {"default": 30, "step": 1, "display": "number"}), 30 | "value_range": ("INT", {"default": 15, "step": 1, "display": "number"}), 31 | "easing_type": (easing_types, {"default": "linear", "display": "dropdown"}), 32 | "step_mode": (step_mode, {"default": "single", "display": "dropdown"}), 33 | "animation_reset": (animation_reset, {"default": "word", "display": "dropdown"}), 34 | "id": ("INT", {"default": 0, "step": 1, "display": "number"}), 35 | "scheduled_values": ("STRING", {"default": "[]", "display": "text","readOnly": True }), 36 | }, 37 | "hidden": { 38 | "unique_id": "UNIQUE_ID", 39 | "extra_pnginfo": "EXTRA_PNGINFO" 40 | } 41 | } 42 | 43 | CATEGORY = "💠 Mana Nodes/📅 Value Scheduling" 44 | RETURN_TYPES = ("INT",) 45 | RETURN_NAMES = ("scheduled_values",) 46 | FUNCTION = "run" 47 | 48 | def run(self, **kwargs): 49 | scheduled_values = str(kwargs['scheduled_values']) 50 | animation_reset = kwargs.get('animation_reset') 51 | # this should be ok but maybe change it 52 | if scheduled_values == '[]': 53 | raise ValueError("scheduled_values is required and cannot be an empty list.") 54 | 55 | # this could also be more elegant 56 | scheduled_values = f"{scheduled_values}${animation_reset}" 57 | 58 | return (scheduled_values,) -------------------------------------------------------------------------------- /nodes/speech2text_node.py: -------------------------------------------------------------------------------- 1 | import librosa 2 | from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC 3 | import torch 4 | import requests 5 | import json 6 | 7 | class speech2text: 8 | 9 | def __init__(self): 10 | pass 11 | 12 | @classmethod 13 | def INPUT_TYPES(cls): 14 | spell_check_options = ["English", "Spanish", "French", 15 | "Portuguese", "German", "Italian", 16 | "Russian", "Arabic", "Basque", "Latvian", "Dutch"] 17 | transcription_mode = ["word","line","fill"] 18 | return { 19 | "required": { 20 | "audio_file": ("STRING", {"display": "text","forceInput": True}), 21 | "wav2vec2_model": (cls.get_wav2vec2_models(), {"display": "dropdown", "default": "jonatasgrosman/wav2vec2-large-xlsr-53-english"}), 22 | "spell_check_language": (spell_check_options, {"default": "English", "display": "dropdown"}), 23 | "framestamps_max_chars": ("INT", {"default": 25, "step": 1, "display": "number"}), 24 | "fps": ("INT", {"default": 30, "min": 1, "max": 60, "step": 1}), 25 | "transcription_mode": (transcription_mode, {"default": "fill", "display": "dropdown"}), 26 | "uppercase": ("BOOLEAN", {"default": True}) 27 | } 28 | } 29 | 30 | CATEGORY = "💠 Mana Nodes" 31 | RETURN_TYPES = ("TRANSCRIPTION", "STRING","STRING","STRING",) 32 | RETURN_NAMES = ("transcription", "raw_string","framestamps_string","timestamps_string",) 33 | FUNCTION = "run" 34 | OUTPUT_NODE = True 35 | 36 | def run(self, audio_file, wav2vec2_model, spell_check_language,framestamps_max_chars,**kwargs): 37 | fps = kwargs.get('fps',30) 38 | # Load and process with Wav2Vec2 model 39 | audio_array = self.audiofile_to_numpy(audio_file) 40 | raw_transcription = self.transcribe_with_timestamps(audio_array, wav2vec2_model) 41 | 42 | # Correct with spell checker 43 | corrected_transcription = self.correct_transcription_with_language_model(raw_transcription, 
spell_check_language) 44 | 45 | if kwargs['uppercase'] != True: 46 | # Assuming 'transcriptions' is a list of dictionaries 47 | corrected_transcription = [(word.lower(), start_time, end_time) for word, start_time, end_time in corrected_transcription] 48 | 49 | # Generate string formatted like JSON for transcription with timestamps 50 | frame_structure_transcription = self.transcription_to_frame_structure_string(corrected_transcription,fps,framestamps_max_chars) 51 | 52 | # Convert raw transcription to string format 53 | raw_transcription_string = self.transcription_to_string(corrected_transcription) 54 | 55 | # Convert raw transcription to string format 56 | json = self.transcription_to_json_string(corrected_transcription) 57 | 58 | settings_dict = { 59 | "transcription_data": corrected_transcription, # This is your list of tuples 60 | "fps": fps, # Assuming fps is a variable holding the fps value 61 | "transcription_mode": kwargs['transcription_mode'] # Assuming transcription_mode is a variable holding the mode as a string 62 | } 63 | 64 | return (settings_dict, raw_transcription_string,frame_structure_transcription,json,) 65 | 66 | def transcription_to_string(self, raw_transcription): 67 | # Convert a list of (word, start_time, end_time) tuples to a string 68 | return ' '.join([word for word, start_time, end_time in raw_transcription]) 69 | 70 | def transcription_to_frame_structure_string(self, corrected_transcription, fps, framestamps_max_chars): 71 | formatted_transcription = "" 72 | current_sentence = "" 73 | sentence_start_frame = -1 74 | 75 | for word, start_time, _ in corrected_transcription: 76 | frame_number = round(start_time * fps) 77 | 78 | if sentence_start_frame == -1: 79 | sentence_start_frame = frame_number # Initialize start frame for the first word 80 | 81 | if len(current_sentence + " " + word) <= framestamps_max_chars: 82 | if current_sentence: # Add space before word if sentence is not empty 83 | current_sentence += " " 84 | current_sentence += word 85 | formatted_transcription += f'"{frame_number}": "{current_sentence}",\n' 86 | else: 87 | # Start a new sentence when max_chars is reached 88 | current_sentence = word 89 | sentence_start_frame = frame_number 90 | formatted_transcription += f'"{frame_number}": "{current_sentence}",\n' 91 | 92 | return formatted_transcription 93 | 94 | def transcription_to_json_string(self, raw_transcription): 95 | # Convert raw transcription with timestamps to a string formatted like JSON 96 | transcription_data = [{"word": word, "start_time": start_time, "end_time": end_time} for word, start_time, end_time in raw_transcription] 97 | return json.dumps(transcription_data, indent=2) 98 | 99 | def correct_transcription_with_language_model(self, raw_transcription,spell_check_language): 100 | 101 | # Mapping of full language names to their ISO language codes 102 | language_code_map = { 103 | "English": "en", 104 | "Spanish": "es", 105 | "French": "fr", 106 | "Portuguese": "pt", 107 | "German": "de", 108 | "Italian": "it", 109 | "Russian": "ru", 110 | "Arabic": "ar", 111 | "Basque": "eu", 112 | "Latvian": "lv", 113 | "Dutch": "nl" 114 | } 115 | 116 | language_code = language_code_map.get(spell_check_language, "en") # Default to English if not found 117 | 118 | try: 119 | from spellchecker import SpellChecker 120 | spell = SpellChecker(language=language_code) # Specify English language 121 | except ImportError: 122 | print("SpellChecker module is NOT accessible.") 123 | # Return the original transcription if spellchecker is not available 124 | 
return raw_transcription 125 | 126 | # Correct each word in the transcription 127 | corrected_transcription = [] 128 | for word, start_time, end_time in raw_transcription: 129 | # Attempt to correct the word 130 | corrected_word = spell.correction(word) 131 | # Use the original word if no correction is found or if the correction returns None 132 | corrected_word = corrected_word if corrected_word else word 133 | corrected_transcription.append((corrected_word.upper(), start_time, end_time)) 134 | 135 | return corrected_transcription 136 | 137 | 138 | def transcribe_with_timestamps(self, audio_array,wave2vec_model): 139 | model = Wav2Vec2ForCTC.from_pretrained(wave2vec_model) 140 | processor = Wav2Vec2Processor.from_pretrained(wave2vec_model) 141 | 142 | inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt", padding=True) 143 | 144 | with torch.no_grad(): 145 | logits = model(inputs.input_values).logits 146 | 147 | predicted_ids = torch.argmax(logits, dim=-1) 148 | 149 | # Calculate timestamps 150 | raw_timestamps = self.calculate_timestamps(predicted_ids, inputs.input_values.shape[1], processor) 151 | 152 | # Filter out padding tokens from timestamps 153 | timestamps = [(token, time) for token, time in raw_timestamps if token != ""] 154 | 155 | # Group tokens into words 156 | word_timestamps = self.group_timestamps_into_words(timestamps) 157 | 158 | return word_timestamps 159 | 160 | def group_timestamps_into_words(self, filtered_timestamps): 161 | words = [] 162 | current_word = [] 163 | for token, time in filtered_timestamps: 164 | if token == '|': 165 | if current_word: 166 | start_time = current_word[0][1] 167 | end_time = current_word[-1][1] 168 | word_string = ''.join([t[0] for t in current_word]) # Concatenate tokens into a single string 169 | words.append((word_string, start_time, end_time)) 170 | current_word = [] 171 | else: 172 | current_word.append((token, time)) 173 | 174 | # Check if there are remaining tokens in the last word 175 | if current_word: 176 | start_time = current_word[0][1] 177 | end_time = current_word[-1][1] 178 | word_string = ''.join([t[0] for t in current_word]) # Concatenate tokens into a single string 179 | words.append((word_string, start_time, end_time)) 180 | 181 | return words 182 | 183 | def calculate_timestamps(self, predicted_ids, input_length, processor): 184 | # Approximate stride (20ms window for 16kHz audio) 185 | stride = int(0.02 * 16000) 186 | 187 | timestamps = [] 188 | current_time = 0.0 189 | 190 | for idx in range(predicted_ids.shape[1]): 191 | if predicted_ids[0, idx].item() != -100: # Skip padding tokens 192 | current_time = (stride * idx) / 16000 # Convert from samples to seconds 193 | token = processor.tokenizer.convert_ids_to_tokens(predicted_ids[0, idx].item()) 194 | timestamps.append((token, current_time)) 195 | 196 | return timestamps 197 | 198 | def group_tokens_to_words(self, timestamps): 199 | word_timestamps = [] 200 | current_word = "" 201 | word_start_time = None 202 | 203 | for token, time in timestamps: 204 | if token == "": 205 | continue 206 | 207 | if token.startswith("▁"): # Indicates start of a new word 208 | if current_word: 209 | # Complete the current word and reset for the next word 210 | word_end_time = time 211 | word_timestamps.append((current_word, word_start_time, word_end_time)) 212 | current_word = "" 213 | # Start a new word, remove the "▁" character 214 | current_word = token[1:] 215 | word_start_time = time 216 | else: 217 | # Continue building the current word 218 | current_word += 
token 219 | 220 | # Add the last word if present 221 | if current_word: 222 | word_timestamps.append((current_word, word_start_time, timestamps[-1][1])) 223 | 224 | return word_timestamps 225 | 226 | def get_wav2vec2_models(): 227 | # Query Hugging Face Models API for Wav2Vec2 models 228 | url = "https://huggingface.co/api/models?search=wav2vec2" 229 | response = requests.get(url) 230 | models = response.json() 231 | 232 | # Extract model names 233 | model_names = [model['modelId'] for model in models] 234 | return model_names 235 | 236 | 237 | @staticmethod 238 | def audiofile_to_numpy(file_path, sr=16000): 239 | try: 240 | audio, _ = librosa.load(file_path, sr=sr) 241 | return audio 242 | except Exception as e: 243 | print("Error loading audio file:", e) 244 | return None 245 | 246 | -------------------------------------------------------------------------------- /nodes/string2file_node.py: -------------------------------------------------------------------------------- 1 | import os 2 | from pathlib import Path 3 | import folder_paths 4 | 5 | class string2file: 6 | 7 | def __init__(self): 8 | pass 9 | 10 | @classmethod 11 | def INPUT_TYPES(cls): 12 | return { 13 | "required": { 14 | "filename_prefix": ("STRING", {"default": "text\\text"}), 15 | "string": ("STRING", {"forceInput": True}), 16 | }, 17 | "hidden": { 18 | "unique_id": "UNIQUE_ID", 19 | "extra_pnginfo": "EXTRA_PNGINFO" 20 | } 21 | 22 | } 23 | 24 | INPUT_IS_LIST = True 25 | CATEGORY = "💠 Mana Nodes" 26 | RETURN_TYPES = () 27 | RETURN_NAMES = () 28 | FUNCTION = "run" 29 | OUTPUT_NODE = True 30 | 31 | def run(self, string,unique_id = None, extra_pnginfo=None, **kwargs): 32 | 33 | full_path = self.construct_text_path(kwargs) 34 | 35 | # Write the string to the file 36 | with open(full_path, 'w') as file: 37 | file.write(string[0]) 38 | 39 | if unique_id and extra_pnginfo and "workflow" in extra_pnginfo[0]: 40 | workflow = extra_pnginfo[0]["workflow"] 41 | node = next((x for x in workflow["nodes"] if str(x["id"]) == unique_id[0]), None) 42 | if node: 43 | node["widgets_values"] = [string] 44 | 45 | return {"ui": {"text": string}, "result": (string,)} 46 | 47 | def construct_text_path(self, kwargs): 48 | base_directory = folder_paths.get_output_directory() 49 | filename_prefix = os.path.normpath(kwargs['filename_prefix'][0]) 50 | full_path = os.path.join(base_directory, filename_prefix) 51 | 52 | # Ensure the path ends with .mp4 53 | if not full_path.endswith('.txt'): 54 | full_path += '.txt' 55 | 56 | # Increment filename if it already exists 57 | counter = 1 58 | while os.path.exists(full_path): 59 | # Construct a new path with an incremented number 60 | new_filename = f"{filename_prefix}_{counter}.txt" 61 | full_path = os.path.join(base_directory, new_filename) 62 | counter += 1 63 | 64 | # Create the directory if it does not exist 65 | Path(os.path.dirname(full_path)).mkdir(parents=True, exist_ok=True) 66 | 67 | return full_path -------------------------------------------------------------------------------- /nodes/text2speech_node.py: -------------------------------------------------------------------------------- 1 | from transformers import pipeline 2 | import scipy.io.wavfile 3 | from pathlib import Path 4 | import os 5 | import folder_paths 6 | 7 | class text2speech: 8 | 9 | def __init__(self): 10 | pass 11 | 12 | @classmethod 13 | def INPUT_TYPES(cls): 14 | return { 15 | "required": { 16 | "text": ("STRING", {"display": "text", "placeholder": "[laughter]\n[laughs]\n[sighs]\n[music]\n[gasps]\n[clears throat]\n— or … for 
hesitations\n♪ for song lyrics\nCapitalization for emphasis of a word\nMAN/WOMAN: for bias towards speaker", "multiline": True}), "filename_prefix": ("STRING", {"display": "text", "default": "audio\\audio"}) 17 | }, 18 | } 19 | 20 | CATEGORY = "💠 Mana Nodes" 21 | RETURN_TYPES = ("STRING",) 22 | RETURN_NAMES = ("audio_file",) 23 | FUNCTION = "run" 24 | OUTPUT_NODE = True 25 | 26 | def run(self, text, **kwargs): 27 | 28 | full_path = os.path.join(folder_paths.get_output_directory(), os.path.normpath(kwargs['filename_prefix'])) 29 | if not full_path.endswith('.wav'): 30 | full_path += '.wav' 31 | Path(os.path.dirname(full_path)).mkdir(parents=True, exist_ok=True) 32 | 33 | synthesizer = pipeline("text-to-speech", "suno/bark") 34 | speech = synthesizer(text, forward_params={"do_sample": True}) 35 | 36 | audio_waveform = speech['audio'] 37 | if audio_waveform.ndim == 2: 38 | # Transpose if it's in the wrong shape (num_channels, num_samples) 39 | audio_waveform = audio_waveform.T 40 | 41 | scipy.io.wavfile.write(full_path, rate=speech['sampling_rate'], data=audio_waveform) 42 | 43 | full_path_to_audio = os.path.abspath(full_path) 44 | return (full_path_to_audio,) 45 | 46 | -------------------------------------------------------------------------------- /nodes/text_graphic_element_node.py: -------------------------------------------------------------------------------- 1 | from matplotlib import font_manager 2 | import os 3 | import json 4 | 5 | class text_graphic_element: 6 | 7 | FONTS = {} 8 | FONT_NAMES = [] 9 | 10 | def __init__(self): 11 | pass 12 | 13 | @classmethod 14 | def system_font_names(self): 15 | mgr = font_manager.FontManager() 16 | return {font.name: font.fname for font in mgr.ttflist} 17 | 18 | @classmethod 19 | def get_font_files(self, font_dir): 20 | extensions = ['.ttf', '.otf', '.woff', '.woff2'] 21 | return [os.path.join(font_dir, f) for f in os.listdir(font_dir) 22 | if os.path.isfile(os.path.join(font_dir, f)) and f.endswith(tuple(extensions))] 23 | 24 | @classmethod 25 | def setup_font_directories(self): 26 | script_dir = os.path.dirname(os.path.dirname(__file__)) 27 | custom_font_files = [] 28 | for dir_name in ['font', 'font_files']: 29 | font_dir = os.path.join(script_dir, dir_name) 30 | if os.path.exists(font_dir): 31 | custom_font_files.extend(self.get_font_files(font_dir)) 32 | return custom_font_files 33 | 34 | @classmethod 35 | def combined_font_list(self): 36 | system_fonts = self.system_font_names() 37 | custom_font_files = self.setup_font_directories() 38 | 39 | # Create a dictionary for custom fonts mapping font file base names to their paths 40 | custom_fonts = {os.path.splitext(os.path.basename(f))[0]: f for f in custom_font_files} 41 | 42 | # Merge system_fonts and custom_fonts dictionaries 43 | all_fonts = {**system_fonts, **custom_fonts} 44 | return all_fonts 45 | 46 | @classmethod 47 | def INPUT_TYPES(cls): 48 | cls.FONTS = cls.combined_font_list() 49 | cls.FONT_NAMES = sorted(cls.FONTS.keys()) 50 | return { 51 | "required": { 52 | "font_file": (cls.FONT_NAMES, {"default": cls.FONT_NAMES[0]}), 53 | "font_size": ("INT", {"default": 75, "min": 1, "step": 1, "display": "number"}), 54 | "font_color": ("STRING", {"default": "white", "display": "text"}), 55 | "kerning": ("INT", {"default": 0, "step": 1, "display": "number"}), 56 | "border_width": ("INT", {"default": 0, "min": 0, "step": 1, "display": "number"}), 57 | "border_color": ("STRING", {"default": "grey", "display": "text"}), 58 | "shadow_color": ("STRING", {"default": "grey", "display": "text"}), 59 | 
"shadow_offset_x": ("INT", {"default": 0, "min": 0, "step": 1, "display": "number"}), 60 | "shadow_offset_y": ("INT", {"default": 0, "min": 0, "step": 1, "display": "number"}), 61 | "x_offset": ("INT", {"default": 0, "step": 1, "display": "number"}), 62 | "y_offset": ("INT", {"default": 0, "step": 1, "display": "number"}), 63 | "rotation": ("INT", {"default": 0, "min": -360, "max": 360, "step": 1}), 64 | "rotation_anchor_x": ("INT", {"default": 0, "step": 1}), 65 | "rotation_anchor_y": ("INT", {"default": 0, "step": 1}), 66 | }, 67 | } 68 | 69 | CATEGORY = "💠 Mana Nodes/⚙️ Generator Settings" 70 | RETURN_TYPES = ("TEXT_GRAPHIC_ELEMENT",) 71 | RETURN_NAMES = ("font",) 72 | FUNCTION = "run" 73 | #INPUT_IS_LIST = True 74 | 75 | def parse_int_or_json(self, value): 76 | """Parse the input as JSON if it's a string in JSON format, otherwise return as is.""" 77 | if isinstance(value, str): 78 | # Extracting and removing the $animation_reset part 79 | if '$' in value: 80 | parts = value.split('$', 1) 81 | value = parts[0] # JSON part 82 | animation_reset = parts[1] # animation_reset part 83 | else: 84 | animation_reset = None 85 | 86 | try: 87 | json_value = json.loads(value) 88 | except json.JSONDecodeError: 89 | json_value = value 90 | 91 | return json_value, animation_reset 92 | 93 | return value, None 94 | 95 | def process_color_input(self, border_color): 96 | if isinstance(border_color, str): 97 | return (border_color, None) 98 | return border_color 99 | 100 | def run(self, **kwargs): 101 | settings_string = kwargs.get('scheduled_values', '{}') 102 | json_settings, animation_reset_from_string = self.parse_int_or_json(settings_string) 103 | settings = { 104 | 'font_file': json_settings.get('font_file', kwargs.get('font_file')), 105 | 'font_color': self.process_color_input(kwargs.get('font_color')), 106 | 'kerning': json_settings.get('kerning', self.parse_int_or_json(kwargs.get('kerning'))), 107 | 'border_width': json_settings.get('border_width', self.parse_int_or_json(kwargs.get('border_width'))), 108 | 'border_color': self.process_color_input(kwargs.get('border_color')), 109 | 'shadow_color': self.process_color_input(kwargs.get('shadow_color')), 110 | 'shadow_offset_x': json_settings.get('shadow_offset_x', self.parse_int_or_json(kwargs.get('shadow_offset_x'))), 111 | 'shadow_offset_y': json_settings.get('shadow_offset_y', self.parse_int_or_json(kwargs.get('shadow_offset_y'))), 112 | 'font_size': json_settings.get('font_size', self.parse_int_or_json(kwargs.get('font_size'))), 113 | 'x_offset': json_settings.get('x_offset', self.parse_int_or_json(kwargs.get('x_offset'))), 114 | 'y_offset': json_settings.get('y_offset', self.parse_int_or_json(kwargs.get('y_offset'))), 115 | 'rotation': json_settings.get('rotation', self.parse_int_or_json(kwargs.get('rotation'))), 116 | 'rotation_anchor_x': json_settings.get('rotation_anchor_x', self.parse_int_or_json(kwargs.get('rotation_anchor_x'))), 117 | 'rotation_anchor_y': json_settings.get('rotation_anchor_y', self.parse_int_or_json(kwargs.get('rotation_anchor_y'))), 118 | } 119 | 120 | return (settings,) 121 | -------------------------------------------------------------------------------- /nodes/video2audio_node.py: -------------------------------------------------------------------------------- 1 | import os 2 | import cv2 3 | import torch 4 | import hashlib 5 | import folder_paths 6 | from ..helpers.utils import ensure_opencv, pil2tensor 7 | from PIL import Image 8 | from pathlib import Path 9 | from moviepy.editor import VideoFileClip 10 | 11 | class 
video2audio: 12 | 13 | def __init__(self): 14 | pass 15 | 16 | @classmethod 17 | def INPUT_TYPES(cls): 18 | input_dir = os.path.join(folder_paths.get_input_directory(), "video") 19 | os.makedirs(input_dir, exist_ok=True) 20 | files = [f"video/{f}" for f in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, f))] 21 | return { 22 | "required": { 23 | "video": (sorted(files), {"mana_video_upload": True}), 24 | "frame_limit": ("INT", {"default": 16, "min": 1, "max": 10240, "step": 1}), 25 | "frame_start": ("INT", {"default": 0, "min": 0, "max": 0xFFFFFFFF, "step": 1}), 26 | "filename_prefix": ("STRING", {"default": "audio\\audio"}) 27 | }, 28 | "optional": {} 29 | } 30 | 31 | CATEGORY = "💠 Mana Nodes" 32 | RETURN_TYPES = ("IMAGE", "STRING","INT", "INT", "INT","INT",) 33 | RETURN_NAMES = ("images", "audio_file","fps","frame_count", "height", "width",) 34 | FUNCTION = "run" 35 | 36 | def run(self, **kwargs): 37 | video_path = folder_paths.get_annotated_filepath(kwargs['video']) 38 | frames, width, height = self.extract_frames(video_path, kwargs) 39 | video_path = Path(video_path) 40 | audio, fps = self.extract_audio_with_moviepy(video_path, kwargs) 41 | if not frames: 42 | raise ValueError("No frames could be extracted from the video.") 43 | if not audio: 44 | audio = "No audio track in the video." 45 | return (torch.cat(frames, dim=0), audio, fps, len(frames), height, width,) 46 | 47 | def extract_audio_with_moviepy(self, video_path, kwargs): 48 | # Convert WindowsPath object to string 49 | video_file_path_str = str(video_path) 50 | 51 | # Load the video file 52 | video = VideoFileClip(video_file_path_str) 53 | 54 | # Check if the video has an audio track 55 | if video.audio is None: 56 | return None, video.fps 57 | 58 | # Calculate start and end time in seconds 59 | fps = video.fps # frames per second 60 | start_time = kwargs['frame_start'] / fps 61 | end_time = (kwargs['frame_start'] + kwargs['frame_limit']) / fps 62 | 63 | full_path = os.path.join(folder_paths.get_output_directory(), os.path.normpath(kwargs['filename_prefix'])) 64 | if not full_path.endswith('.wav'): 65 | full_path += '.wav' 66 | Path(os.path.dirname(full_path)).mkdir(parents=True, exist_ok=True) 67 | full_path_to_audio = os.path.abspath(full_path) 68 | 69 | # Extract the specific audio segment 70 | audio = video.subclip(start_time, end_time).audio 71 | audio.write_audiofile(full_path) 72 | fps = video.fps 73 | 74 | return full_path_to_audio, fps 75 | 76 | def extract_frames(self, video_path, kwargs): 77 | ensure_opencv() 78 | video = cv2.VideoCapture(video_path) 79 | 80 | width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH)) 81 | height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT)) 82 | 83 | video.set(cv2.CAP_PROP_POS_FRAMES, kwargs['frame_start']) 84 | 85 | frames = [] 86 | for i in range(kwargs['frame_limit']): 87 | ret, frame = video.read() 88 | if ret: 89 | frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) 90 | frames.append(pil2tensor(Image.fromarray(frame))) 91 | else: 92 | break 93 | 94 | video.release() 95 | return frames, width, height 96 | 97 | @classmethod 98 | def IS_CHANGED(cls, video, *args, **kwargs): 99 | video_path = folder_paths.get_annotated_filepath(video) 100 | m = hashlib.sha256() 101 | with open(video_path, "rb") as f: 102 | m.update(f.read()) 103 | return m.digest().hex() 104 | 105 | @classmethod 106 | def VALIDATE_INPUTS(cls, video, *args, **kwargs): 107 | if not folder_paths.exists_annotated_filepath(video): 108 | return "Invalid video file: {}".format(video) 109 | return True 110 | 
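# A minimal standalone sketch of the subclip/audio split that extract_audio_with_moviepy()
# performs above, using the same moviepy calls as the node. The file name, frame window and
# output path below are illustrative assumptions only, not values taken from the node.
if __name__ == "__main__":
    from moviepy.editor import VideoFileClip

    clip = VideoFileClip("example.mp4")      # assumed local test file
    frame_start, frame_limit = 0, 16         # same semantics as the node's inputs
    start_s = frame_start / clip.fps
    end_s = (frame_start + frame_limit) / clip.fps

    if clip.audio is None:
        print("No audio track in the video.")
    else:
        # Write only the audio of the selected frame window, mirroring the node's behaviour.
        clip.subclip(start_s, end_s).audio.write_audiofile("audio_segment.wav")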
-------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | numpy 2 | torch 3 | torchvision 4 | Pillow 5 | moviepy 6 | librosa 7 | transformers 8 | opencv-python-headless 9 | requests 10 | matplotlib 11 | scipy 12 | pyspellchecker 13 | -------------------------------------------------------------------------------- /web/js/scheduled_values.js: -------------------------------------------------------------------------------- 1 | import { app } from "../../../scripts/app.js"; 2 | function loadChartJs(callback) { 3 | const script = document.createElement('script'); 4 | script.src = 'https://cdn.jsdelivr.net/npm/chart.js'; 5 | script.onload = () => { 6 | const pluginScript = document.createElement('script'); 7 | pluginScript.src = 'https://cdn.jsdelivr.net/npm/chartjs-plugin-zoom@1.0.1/dist/chartjs-plugin-zoom.min.js'; 8 | pluginScript.onload = callback; 9 | document.head.appendChild(pluginScript); 10 | }; 11 | document.head.appendChild(script); 12 | } 13 | function loadBootstrapCss() { 14 | const link = document.createElement('link'); 15 | link.href = 'https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css'; 16 | link.rel = 'stylesheet'; 17 | document.head.appendChild(link); 18 | const link2 = document.createElement('link'); 19 | link2.href = 'https://cdnjs.cloudflare.com/ajax/libs/bootstrap-icons/1.4.0/font/bootstrap-icons.min.css'; 20 | link2.rel = 'stylesheet'; 21 | document.head.appendChild(link2); 22 | 23 | } 24 | function chainCallback(object, property, callback) { 25 | if (object == undefined) { 26 | return; 27 | } 28 | if (property in object) { 29 | const originalCallback = object[property]; 30 | object[property] = function () { 31 | const result = originalCallback.apply(this, arguments); 32 | callback.apply(this, arguments); 33 | return result; 34 | }; 35 | } else { 36 | object[property] = callback; 37 | } 38 | } 39 | 40 | class TimelineWidget { 41 | constructor(node, id) { 42 | this.node = node; 43 | this.keyframes = []; 44 | this.widgets = node.widgets; 45 | this.maxX = 20; // Default maxX 46 | this.maxY = 100; // Default maxY 47 | this.prevMaxX = 1; // Previous value of maxX 48 | this.prevValueRange = 1; 49 | this.pointsDisplay = null; 50 | this.generateButton = null; 51 | this.deleteButton = null; 52 | this.generatedKeyframes = []; 53 | this.id = id; 54 | this.createChartContainer(); 55 | } 56 | 57 | createChartContainer() { 58 | this.chartContainer = document.createElement('div'); 59 | this.chartContainer.style.height = '200px'; 60 | this.chartContainer.style.width = '200px'; 61 | 62 | this.node.addDOMWidget("chart", "custom", this.chartContainer, {}); 63 | 64 | } 65 | 66 | updateGenerateButtonState() { 67 | if (this.generateButton != null && this.deleteButton != null){ 68 | this.generateButton.disabled = this.keyframes.length < 2; 69 | this.deleteButton.disabled = this.generatedKeyframes.length === 0; 70 | } 71 | 72 | } 73 | createPointsDisplay() { 74 | // Ensure pointsDisplay is created and has a defined size 75 | this.pointsDisplay = document.createElement('div'); 76 | this.pointsDisplay.classList.add('points-display', 'd-flex', 'flex-wrap', 'justify-content-start', 'align-items-center'); 77 | this.pointsDisplay.style.borderRadius = '15px'; 78 | this.pointsDisplay.style.border = '1px solid #C8C8C8'; 79 | this.pointsDisplay.style.backgroundColor = '#353535'; 80 | this.pointsDisplay.style.padding = '10px'; 81 | 
this.pointsDisplay.style.marginTop = '5px'; // Add some space above the container 82 | this.pointsDisplay.style.minHeight = '50px'; // Make sure there is enough height for buttons 83 | 84 | // Append the points display to the chart container 85 | this.chartContainer.appendChild(this.pointsDisplay); 86 | 87 | // Update the button states accordingly 88 | } 89 | createGenerationButton() { 90 | // Create a container for buttons and the dropdown 91 | const buttonsContainer = document.createElement('div'); 92 | buttonsContainer.style.width = '100%'; 93 | buttonsContainer.style.display = 'flex'; 94 | buttonsContainer.style.marginBottom = '10px'; 95 | 96 | // Define classes that will be common to buttons and the dropdown 97 | const commonClassList = ['btn']; 98 | const commonHeight = '38px'; 99 | 100 | // Create and append the generate button 101 | this.generateButton = document.createElement('button'); 102 | this.generateButton.innerText = 'Generate Values'; 103 | this.generateButton.classList.add(...commonClassList, 'btn-secondary'); 104 | this.generateButton.style.height = commonHeight; 105 | this.generateButton.style.flex = '1'; // Add this line 106 | this.generateButton.style.marginRight = '2px'; // Add spacing between elements 107 | this.generateButton.style.padding = '5px 10px'; // Adjust padding as needed 108 | this.generateButton.onclick = () => this.generateInBetweenValues(); 109 | // add border radius to the button 110 | this.generateButton.style.borderRadius = '5px'; 111 | buttonsContainer.appendChild(this.generateButton); 112 | 113 | // Create and append the delete button 114 | this.deleteButton = document.createElement('button'); 115 | this.deleteButton.innerText = 'Delete Generated'; 116 | this.deleteButton.classList.add(...commonClassList, 'btn-danger'); 117 | this.deleteButton.style.height = commonHeight; 118 | this.deleteButton.style.flex = '1'; // Add this line 119 | // add border radius to the button 120 | this.deleteButton.style.borderRadius = '5px'; 121 | this.deleteButton.style.marginRight = '2px'; // Add spacing between elements 122 | this.deleteButton.style.padding = '5px 10px'; // Adjust padding as needed 123 | this.deleteButton.onclick = () => this.deleteGeneratedValues(); 124 | buttonsContainer.appendChild(this.deleteButton); 125 | 126 | 127 | // Create the nterpolation type dropdown 128 | 129 | 130 | // Append the buttons container to the pointsDisplay 131 | this.pointsDisplay.appendChild(buttonsContainer); 132 | 133 | // Update the button states accordingly 134 | this.updateGenerateButtonState(); 135 | } 136 | 137 | 138 | updatePointsDisplay() { 139 | const badges = this.pointsDisplay.querySelectorAll('.badge'); 140 | badges.forEach(badge => badge.remove()); 141 | this.keyframes.forEach((kf, index) => { 142 | const badge = document.createElement('div'); 143 | badge.classList.add('badge', 'm-1'); 144 | badge.style.border = '1px solid #666666'; 145 | badge.style.borderRadius = '25px'; 146 | badge.style.textAlign = 'center'; 147 | badge.style.paddingLeft = '15px'; 148 | badge.style.paddingRight = '-15px'; 149 | badge.style.color = '#999999'; 150 | badge.innerHTML = `frame: ${kf.x}, value: ${kf.y}`; 151 | badge.style.backgroundColor = '#222222'; 152 | badge.style.display = 'flex'; // Add this line 153 | badge.style.justifyContent = 'center'; // Add this line 154 | badge.style.alignItems = 'center'; 155 | badge.style.height = '30px'; // Adjust the value as needed 156 | const deleteButton = document.createElement('button'); 157 | deleteButton.classList.add('btn'); 158 | 
deleteButton.innerHTML = ''; 159 | deleteButton.style.color = '#999999'; 160 | deleteButton.style.marginLeft = '-15px'; 161 | deleteButton.onclick = () => { 162 | this.removeChartKeyframe(index); 163 | }; 164 | 165 | 166 | // Add an edit button to each badge 167 | let editButton = document.createElement('button'); 168 | editButton.innerHTML = ''; 169 | editButton.classList.add('btn'); 170 | 171 | // add minus padding to the left of the button 172 | editButton.style.color = '#999999'; 173 | 174 | // Store the this value in a variable 175 | let self = this; 176 | // Add event listener to the edit button 177 | editButton.onclick = () => { 178 | // Get the current frame and value variables of the chart point from the badge's text 179 | let badgeText = badge.innerText; 180 | let splitText = badgeText.split(', '); 181 | let match1 = splitText[0].match(/frame: ([-+]?[0-9]*\.?[0-9]+)/); 182 | let currentFrame = match1 ? match1[1] : ''; 183 | let match2 = splitText[1].match(/value: ([-+]?[0-9]*\.?[0-9]+)/); 184 | let currentValue = match2 ? match2[1] : ''; 185 | 186 | // Create labels and input fields for the frame and value 187 | let frameLabel = document.createElement('span'); 188 | frameLabel.innerText = 'frame: '; 189 | 190 | let frameInput = document.createElement('input'); 191 | frameInput.type = 'text'; 192 | frameInput.value = currentFrame; 193 | frameInput.style.width = '25px'; // Set the width dynamically 194 | frameInput.style.height = '15px'; // Adjust the value as needed 195 | 196 | 197 | let valueLabel = document.createElement('span'); 198 | valueLabel.innerText = 'value: '; 199 | 200 | let valueInput = document.createElement('input'); 201 | valueInput.type = 'text'; 202 | valueInput.value = currentValue; 203 | valueInput.style.width = '25px'; // Set the width dynamically 204 | valueInput.style.height = '15px'; // Adjust the value as needed 205 | 206 | // Create a save button 207 | let saveButton = document.createElement('button'); 208 | saveButton.classList.add('btn'); 209 | saveButton.innerHTML = ''; 210 | saveButton.style.color = '#999999'; 211 | //saveButton.style.marginRight = '20px'; 212 | saveButton.firstChild.style.fontSize = '1.5em'; // Adjust the value as needed// Adjust the value as needed 213 | 214 | // Replace the badge's innerHTML with the labels, input fields, and save button 215 | badge.innerHTML = ''; 216 | badge.appendChild(frameLabel); 217 | badge.appendChild(frameInput); 218 | badge.appendChild(valueLabel); 219 | badge.appendChild(valueInput); 220 | badge.appendChild(saveButton); 221 | 222 | // When creating the badge, store the index of the point 223 | badge.dataset.index = index; 224 | 225 | // Add event listener to the save button 226 | saveButton.onclick = () => { 227 | // Get the new frame and value from the input fields 228 | let newFrame = parseFloat(frameInput.value); 229 | let newValue = parseFloat(valueInput.value); 230 | 231 | // Get the index of the old keyframe from the badge 232 | let oldIndex = parseInt(badge.dataset.index); 233 | 234 | // Remove the old keyframe 235 | self.removeChartKeyframe(oldIndex); 236 | 237 | // Add the new keyframe 238 | self.addChartKeyframe(newFrame, newValue, oldIndex); 239 | 240 | // Update the badge's dataset with the new frame and value 241 | badge.dataset.frame = newFrame; 242 | badge.dataset.value = newValue; 243 | 244 | // Replace the input fields and save button with the new frame and value 245 | badge.innerHTML = `frame: ${newFrame}, value: ${newValue}`; 246 | 247 | // Append the edit and delete buttons after 
updating the badge's innerHTML 248 | badge.appendChild(editButton); 249 | badge.appendChild(deleteButton); 250 | }; 251 | }; 252 | 253 | badge.appendChild(editButton); 254 | badge.appendChild(deleteButton); 255 | this.pointsDisplay.appendChild(badge); 256 | }); 257 | 258 | } 259 | 260 | generateInBetweenValues() { 261 | if (this.keyframes.length < 2) return; // Safety check 262 | 263 | let easing_type_widget = this.widgets.find(w => w.name === "easing_type").value || "linear"; 264 | 265 | // Clear any previously generated keyframes 266 | this.generatedKeyframes = []; 267 | 268 | // Make sure the keyframes are sorted by the frame number 269 | this.keyframes.sort((a, b) => a.x - b.x); 270 | 271 | // The first and the last keyframes should always be part of the generated keyframes 272 | this.generatedKeyframes.push(this.keyframes[0]); 273 | 274 | // Helper functions for interpolation methods 275 | const easings = { 276 | linear: (t, b, c, d) => c * t / d + b, 277 | easeInQuad: (t, b, c, d) => c * (t /= d) * t + b, 278 | easeOutQuad: (t, b, c, d) => -c * (t /= d) * (t - 2) + b, 279 | easeInOutQuad: (t, b, c, d) => { 280 | t /= d / 2; 281 | if (t < 1) return c / 2 * t * t + b; 282 | t--; 283 | return -c / 2 * (t * (t - 2) - 1) + b; 284 | }, 285 | easeInCubic: (t, b, c, d) => c * Math.pow(t / d, 3) + b, 286 | easeOutCubic: (t, b, c, d) => c * (Math.pow(t / d - 1, 3) + 1) + b, 287 | easeInOutCubic: (t, b, c, d) => { 288 | t /= d / 2; 289 | if (t < 1) return c / 2 * Math.pow(t, 3) + b; 290 | t -= 2; 291 | return c / 2 * (Math.pow(t, 3) + 2) + b; 292 | }, 293 | easeInQuart: (t, b, c, d) => c * Math.pow(t / d, 4) + b, 294 | easeOutQuart: (t, b, c, d) => -c * (Math.pow(t / d - 1, 4) - 1) + b, 295 | easeInOutQuart: (t, b, c, d) => { 296 | t /= d / 2; 297 | if (t < 1) return c / 2 * Math.pow(t, 4) + b; 298 | t -= 2; 299 | return -c / 2 * (Math.pow(t, 4) - 2) + b; 300 | }, 301 | easeInQuint: (t, b, c, d) => c * Math.pow(t / d, 5) + b, 302 | easeOutQuint: (t, b, c, d) => c * (Math.pow(t / d - 1, 5) + 1) + b, 303 | easeInOutQuint: (t, b, c, d) => { 304 | t /= d / 2; 305 | if (t < 1) return c / 2 * Math.pow(t, 5) + b; 306 | t -= 2; 307 | return c / 2 * (Math.pow(t, 5) + 2) + b; 308 | }, 309 | exponential: (t, b, c, d) => t === 0 ? 
b : c * Math.pow(2, 10 * (t / d - 1)) + b, 310 | }; 311 | 312 | // Use a default interpolation method if the selected one isn't available 313 | 314 | const interpolate = easings[easing_type_widget] || easings.linear; 315 | 316 | // Perform interpolation based on the selected type 317 | for (let i = 0; i < this.keyframes.length - 1; i++) { 318 | const startFrame = this.keyframes[i]; 319 | const endFrame = this.keyframes[i + 1]; 320 | const frameDiff = endFrame.x - startFrame.x; 321 | const valueDiff = endFrame.y - startFrame.y; 322 | 323 | for (let frame = startFrame.x + 1; frame < endFrame.x; frame++) { 324 | const t = frame - startFrame.x; 325 | const interpolatedValue = interpolate(t, startFrame.y, valueDiff, frameDiff); 326 | const roundedValue = Math.round(interpolatedValue); 327 | this.generatedKeyframes.push({ x: frame, y: roundedValue }); 328 | } 329 | 330 | // Always include the end frame 331 | this.generatedKeyframes.push(endFrame); 332 | 333 | } 334 | // Update the chart with two datasets: one for user keyframes, one for generated keyframes 335 | this.chart.data.datasets = [ 336 | { 337 | label: 'User Values', 338 | data: this.keyframes, 339 | fill: false, 340 | borderColor: 'rgb(75, 192, 192)', 341 | tension: 0, 342 | pointRadius: 5, 343 | pointStyle: 'rectRot', 344 | showLine: true 345 | }, 346 | { 347 | label: 'Generated Values', 348 | data: this.generatedKeyframes, 349 | fill: false, 350 | borderColor: 'rgb(255, 159, 64)', 351 | tension: 0, 352 | pointRadius: 5, 353 | pointStyle: 'circle', 354 | showLine: true 355 | } 356 | ]; 357 | 358 | 359 | 360 | localStorage.setItem('savedGeneratedKeyframes_'+ this.id, JSON.stringify(this.generatedKeyframes)); 361 | 362 | this.chart.update(); 363 | 364 | // Refresh the points display and buttons 365 | this.updatePointsDisplay(); 366 | this.updateGenerateButtonState(); 367 | } 368 | 369 | deleteGeneratedValues() { 370 | this.generatedKeyframes = []; 371 | if (this.chart.data.datasets.length > 1) { 372 | this.chart.data.datasets[1].data = []; 373 | this.chart.update(); 374 | } 375 | this.updateGenerateButtonState(); 376 | this.updatePointsDisplay(); 377 | } 378 | 379 | removeChartKeyframe(index) { 380 | this.keyframes.splice(index, 1); 381 | this.updateChartData(); 382 | this.updatePointsDisplay(); 383 | this.updateGenerateButtonState(); 384 | if(this.keyframes.length == 0){ 385 | this.chartContainer.removeChild(this.pointsDisplay); 386 | this.deleteGeneratedValues() 387 | } 388 | } 389 | updateChartData() { 390 | this.keyframes.sort((a, b) => a.x - b.x); 391 | this.chart.data.datasets[0].data = this.keyframes.map(kf => ({ x: kf.x, y: kf.y })); 392 | 393 | //this.generatedKeyframes.sort((a, b) => a.x - b.x); 394 | //this.chart.data.datasets[1].data = this.generatedKeyframes.map(kf => ({ x: kf.x, y: kf.y })); 395 | this.chart.update(); 396 | } 397 | 398 | updateTicks(maxX, valueRange) { 399 | this.maxX = maxX; 400 | this.maxY = Math.abs(valueRange); 401 | 402 | if (this.chart) { 403 | // Capture the current zoom state 404 | const xScale = this.chart.scales['x']; 405 | const yScale = this.chart.scales['y']; 406 | const xMin = xScale.min; 407 | const xMax = xScale.max; 408 | const yMin = yScale.min; 409 | const yMax = yScale.max; 410 | 411 | // Update the scales 412 | this.chart.options.scales.x.max = maxX; 413 | this.chart.options.scales.y.min = -this.maxY; 414 | this.chart.options.scales.y.max = this.maxY; 415 | 416 | // Sort and update keyframes 417 | this.keyframes.sort((a, b) => a.x - b.x); 418 | this.chart.data.datasets[0].data = 
this.keyframes.map(kf => ({ x: kf.x, y: kf.y })); 419 | 420 | // Reapply the zoom state 421 | xScale.min = xMin; 422 | xScale.max = xMax; 423 | yScale.min = yMin; 424 | yScale.max = yMax; 425 | 426 | this.chart.update(); 427 | 428 | } 429 | } 430 | updateStepSize(stepSize) { 431 | if (this.chart) { 432 | // Adjust the step size of the chart based on the stepSize value 433 | if (stepSize === 'auto') { 434 | this.chart.options.scales.x.ticks.autoSkip = true; 435 | this.chart.options.scales.y.ticks.autoSkip = true; 436 | } else { 437 | this.chart.options.scales.x.ticks.autoSkip = false; 438 | this.chart.options.scales.x.ticks.stepSize = 1; 439 | this.chart.options.scales.y.ticks.autoSkip = false; 440 | this.chart.options.scales.y.ticks.stepSize = 1; 441 | } 442 | 443 | this.chart.update(); 444 | } 445 | } 446 | 447 | addChartKeyframe(x, y) { 448 | const keyframeIndex = this.keyframes.findIndex(kf => kf.x === x); 449 | if (keyframeIndex > -1) { 450 | this.keyframes[keyframeIndex].y = y; 451 | } else { 452 | this.keyframes.push({ x: x, y: y }); 453 | } 454 | // Sort and update keyframes 455 | this.keyframes.sort((a, b) => a.x - b.x); 456 | this.chart.data.datasets[0].data = this.keyframes.map(kf => ({ x: kf.x, y: kf.y })); 457 | // Update chart without losing zoom state 458 | this.chart.update(); 459 | if(this.pointsDisplay != null ) { 460 | this.updatePointsDisplay(); 461 | } 462 | else { 463 | 464 | this.createPointsDisplay(); 465 | this.createGenerationButton(); 466 | this.updatePointsDisplay(); 467 | 468 | } 469 | if(!this.pointsDisplay.parentNode){ 470 | this.chartContainer.appendChild(this.pointsDisplay); 471 | } 472 | 473 | this.updateGenerateButtonState(); 474 | localStorage.setItem('savedKeyframes_' + this.id, JSON.stringify(this.keyframes)); 475 | 476 | } 477 | 478 | calculateValuesFromClick(event, canvas) { 479 | const rect = canvas.getBoundingClientRect(); 480 | const scaleX = canvas.width / rect.width; 481 | const scaleY = canvas.height / rect.height; 482 | const canvasX = (event.clientX - rect.left) * scaleX; 483 | const canvasY = (event.clientY - rect.top) * scaleY; 484 | const scales = this.chart.scales; 485 | const xScaleKey = Object.keys(scales).find(key => scales[key].axis === 'x'); 486 | const yScaleKey = Object.keys(scales).find(key => scales[key].axis === 'y'); 487 | if (!scales[xScaleKey] || !scales[yScaleKey]) { 488 | console.error('Chart scales not found.'); 489 | return { x: 0, y: 0 }; 490 | } 491 | const xValue = scales[xScaleKey].getValueForPixel(canvasX); 492 | const yValue = scales[yScaleKey].getValueForPixel(canvasY); 493 | 494 | if(xValue < 1){ 495 | xValue=1; 496 | } 497 | if(xValue > this.maxX){ 498 | xValue = this.maxX; 499 | } 500 | return { x: Math.round(xValue), y: Math.round(yValue) }; 501 | } 502 | 503 | initChart(maxX, maxY) { 504 | this.maxX = maxX; 505 | this.maxY = maxY; 506 | const canvas = document.createElement('canvas'); 507 | 508 | const data = { 509 | labels: Array.from({ length: maxX }, (_, i) => i + 1), 510 | datasets: [{ 511 | label: 'User Values', 512 | data: this.keyframes, 513 | fill: false, 514 | borderColor: 'rgb(75, 192, 192)', 515 | tension: 0, 516 | pointRadius: 5, 517 | pointStyle: 'rectRot', 518 | showLine: true 519 | }, 520 | { 521 | label: 'Generated Values', 522 | data: this.generatedKeyframes, 523 | fill: false, 524 | borderColor: 'rgb(255, 159, 64)', 525 | tension: 0, 526 | pointRadius: 5, 527 | pointStyle: 'circle', 528 | showLine: true 529 | }] 530 | }; 531 | 532 | const config = { 533 | type: 'line', 534 | data: data, 535 | 
options: { 536 | maintainAspectRatio: false, 537 | responsive: true, 538 | layout: { 539 | padding: { 540 | right: 45 541 | } 542 | }, 543 | scales: { 544 | x: { 545 | type: 'linear', 546 | position: 'bottom', 547 | min: 1, 548 | max: Math.round(maxX), 549 | ticks: { 550 | callback: function(value) { 551 | if (Math.floor(value) === value) { 552 | return value; 553 | } 554 | }, 555 | 556 | stepSize: 1, 557 | autoSkip: false, 558 | 559 | }, 560 | title: { 561 | display: true, 562 | text: 'frames' // Replace with your x-axis label 563 | } 564 | }, 565 | y: { 566 | min: -maxY, // Set initial minimum value for y-axis 567 | max: maxY, // Set initial maximum value for y-axis 568 | ticks: { 569 | callback: function(value) { 570 | if (Math.floor(value) === value) { 571 | return value; 572 | } 573 | }, 574 | 575 | stepSize: 1, // Use the dynamic step size 576 | autoSkip: false 577 | }, 578 | title: { 579 | display: true, 580 | text: 'values' 581 | }, 582 | } 583 | }, 584 | plugins: { 585 | tooltip: { 586 | enabled: true, 587 | callbacks: { 588 | label: function(context) { 589 | return `frame = ${context.label}, value = ${context.parsed.y}`; 590 | }, 591 | title: function() { 592 | return 'scheduled value'; // Replace with your desired title 593 | } 594 | } 595 | }, 596 | zoom: { 597 | zoom: { 598 | wheel: { 599 | speed: 0.1, 600 | enabled: true, 601 | }, 602 | mode: 'y', 603 | minInterval: 1, 604 | onZoom: (context) => { 605 | const chart = context.chart; 606 | if (!chart || !chart.scales) { 607 | console.error('Chart or chart scales not found.'); 608 | return; 609 | } 610 | 611 | const yScale = chart.scales['y']; 612 | yScale.options.ticks.min = Math.round(yScale.min); 613 | yScale.options.ticks.max = Math.round(yScale.max); 614 | } 615 | } 616 | } 617 | } 618 | } 619 | }; 620 | 621 | this.chartContainer.appendChild(canvas); 622 | 623 | this.chart = new Chart(canvas.getContext('2d'), config); 624 | 625 | 626 | canvas.addEventListener('click', (event) => { 627 | const { x, y } = this.calculateValuesFromClick(event, canvas); 628 | this.addChartKeyframe(x, y); 629 | }); 630 | 631 | } 632 | 633 | } 634 | 635 | app.registerExtension({ 636 | name: "ManaNodes.scheduled_values", 637 | async beforeRegisterNodeDef(nodeType, nodeData) { 638 | if (nodeData.name === "Scheduled Values") { 639 | 640 | chainCallback(nodeType.prototype, "onConfigure", function () { 641 | const id_widget = this.widgets.find(w => w.name === "id"); 642 | if (id_widget.value == 0) { 643 | 644 | let max = 1000000; 645 | id_widget.value = Math.floor(Math.random() * max); 646 | } 647 | 648 | this.timelineWidget.id = id_widget.value; 649 | const x = this.widgets.find(w => w.name === "frame_count").value; 650 | const y = this.widgets.find(w => w.name === "value_range").value; 651 | this.timelineWidget.maxX = x; 652 | this.timelineWidget.maxY = y; 653 | this.timelineWidget.updateTicks(x, y); 654 | 655 | const step_size = this.widgets.find(w => w.name === "step_mode").value; 656 | this.timelineWidget.updateStepSize(step_size); 657 | 658 | const savedKeyframes = JSON.parse(localStorage.getItem('savedKeyframes_' + id_widget.value)); 659 | const savedGeneratedKeyframes = JSON.parse(localStorage.getItem('savedGeneratedKeyframes_' + id_widget.value)); 660 | 661 | if (savedKeyframes) { 662 | this.timelineWidget.keyframes = savedKeyframes; 663 | } 664 | 665 | if (savedGeneratedKeyframes) { 666 | this.timelineWidget.generatedKeyframes = savedGeneratedKeyframes; 667 | } 668 | }); 669 | 670 | chainCallback(nodeType.prototype, 'onNodeCreated', function 
() { 671 | const frame_count_widget = this.widgets.find(w => w.name === "frame_count"); 672 | const value_range_widget = this.widgets.find(w => w.name === "value_range"); 673 | 674 | let maxX = frame_count_widget ? parseInt(frame_count_widget.value, 10) : 20; 675 | let valueRange = value_range_widget ? parseInt(value_range_widget.value, 10) : 100; 676 | 677 | const timelineWidget = new TimelineWidget(this); 678 | loadChartJs(() => { 679 | timelineWidget.initChart(maxX, valueRange); 680 | }); 681 | loadBootstrapCss(); 682 | this.timelineWidget = timelineWidget; 683 | let max = 1000000; 684 | this.timelineWidget.id = Math.floor(Math.random() * max); 685 | this.widgets.find(w => w.name === "id").value = this.timelineWidget.id; 686 | 687 | }); 688 | 689 | chainCallback(nodeType.prototype, 'onDrawBackground', function () { 690 | const frame_count_widget = this.widgets.find(w => w.name === "frame_count"); 691 | const value_range_widget = this.widgets.find(w => w.name === "value_range"); 692 | 693 | let maxX = frame_count_widget ? parseInt(frame_count_widget.value, 10) : 20; 694 | let valueRange = value_range_widget ? parseInt(value_range_widget.value, 10) : 100; 695 | const step_size_widget = this.widgets.find(w => w.name === "step_mode"); 696 | let stepSize = step_size_widget ? step_size_widget.value : "single"; 697 | 698 | if (this.prevMaxX !== maxX || this.prevValueRange !== valueRange) { 699 | if (this.timelineWidget) { 700 | this.timelineWidget.updateTicks(maxX, valueRange); 701 | } 702 | this.prevMaxX = maxX; 703 | this.prevValueRange = valueRange; 704 | } 705 | 706 | if (this.stepSize !== stepSize) { 707 | if (this.timelineWidget) { 708 | this.timelineWidget.updateStepSize(stepSize); 709 | } 710 | this.stepSize = stepSize; 711 | } 712 | 713 | if (this.timelineWidget) { 714 | this.widgets.find(w => w.name === "id").value = this.timelineWidget.id; 715 | // Combine keyframes and generatedKeyframes. 716 | let combinedKeyframes = [...this.timelineWidget.keyframes, ...this.timelineWidget.generatedKeyframes]; 717 | 718 | // Sort combined array based on 'x' to ensure order. 719 | combinedKeyframes.sort((a, b) => a.x - b.x); 720 | 721 | // Remove duplicates. 722 | const uniqueKeyframes = Array.from(new Map(combinedKeyframes.map(kf => [kf.x, kf])).values()); 723 | 724 | // Set the value for the widget. 
725 |           this.widgets.find(w => w.name === "scheduled_values").value = JSON.stringify(uniqueKeyframes);
726 |         }
727 |       });
728 |     }
729 |   },
730 | });
731 | 
--------------------------------------------------------------------------------
/web/js/text_preview.js:
--------------------------------------------------------------------------------
1 | import { app } from "../../../scripts/app.js";
2 | import { ComfyWidgets } from "../../../scripts/widgets.js";
3 | 
4 | app.registerExtension({
5 |   name: "ManaNodes.string2file",
6 |   async beforeRegisterNodeDef(nodeType, nodeData, app) {
7 |     if (nodeData.name === "Save/Preview Text") {
8 |       function populate(values) {
9 | 
10 |         let previewText;
11 |         if (values.length === 1) {
12 |           // If the function is called during execution, use the first index
13 |           previewText = values[0];
14 |         } else {
15 |           // If the function is called during configuration, use the third index
16 |           previewText = values[2];
17 |         }
18 |         let previewWidget = this.widgets.find(w => w.name === "preview");
19 |         if (!previewWidget) {
20 |           // Create preview widget if it does not exist
21 |           previewWidget = ComfyWidgets["STRING"](this, "preview", ["STRING", { multiline: true }], app).widget;
22 |           previewWidget.inputEl.readOnly = false;
23 |           previewWidget.inputEl.style.opacity = 0.6;
24 |         }
25 |         previewWidget.value = previewText; // Set or update the value
26 | 
27 |         requestAnimationFrame(() => {
28 |           const sz = this.computeSize();
29 |           if (sz[0] < this.size[0]) {
30 |             sz[0] = this.size[0];
31 |           }
32 |           if (sz[1] < this.size[1]) {
33 |             sz[1] = this.size[1];
34 |           }
35 |           this.onResize?.(sz);
36 |           app.graph.setDirtyCanvas(true, false);
37 |         });
38 |       }
39 | 
40 |       const onExecuted = nodeType.prototype.onExecuted;
41 |       nodeType.prototype.onExecuted = function (message) {
42 |         onExecuted?.apply(this, arguments);
43 |         populate.call(this, message.text);
44 |       };
45 | 
46 |       const onConfigure = nodeType.prototype.onConfigure;
47 |       nodeType.prototype.onConfigure = function () {
48 |         onConfigure?.apply(this, arguments);
49 |         if (this.widgets_values?.length) {
50 |           populate.call(this, this.widgets_values);
51 |         }
52 |       };
53 |     }
54 |   },
55 | });
--------------------------------------------------------------------------------
/web/js/vid_preview.js:
--------------------------------------------------------------------------------
1 | import { app, ANIM_PREVIEW_WIDGET } from '../../../scripts/app.js';
2 | import { api } from "../../../scripts/api.js";
3 | import { $el } from '../../../scripts/ui.js';
4 | import { createImageHost } from "../../../scripts/ui/imagePreview.js"
5 | 
6 | const URL_REGEX = /^(https?:\/\/|\/view\?|data:image\/)/;
7 | 
8 | const style = `
9 | .comfy-img-preview video {
10 |   object-fit: contain;
11 |   width: var(--comfy-img-preview-width);
12 |   height: var(--comfy-img-preview-height);
13 | }
14 | `;
15 | 
16 | export function chainCallback(object, property, callback) {
17 |   if (object == undefined) {
18 |     return;
19 |   }
20 |   if (property in object) {
21 |     const callback_orig = object[property];
22 |     object[property] = function () {
23 |       const r = callback_orig.apply(this, arguments);
24 |       callback.apply(this, arguments);
25 |       return r;
26 |     };
27 |   } else {
28 |     object[property] = callback;
29 |   }
30 | };
31 | 
32 | export function formatUploadedUrl(params) {
33 |   if (params.url) {
34 |     return params.url;
35 |   }
36 | 
37 |   params = { ...params };
38 | 
39 |   if (!params.filename && params.name) {
40 |     params.filename = params.name;
41 |     delete params.name;
42 |   }
43 |   const url = api.apiURL("/view?" + new URLSearchParams(params));
44 |   return url;
45 | };
46 | 
47 | export function addVideoPreview(nodeType, options = {}) {
48 |   const createVideoNode = (url) => {
49 |     return new Promise((cb) => {
50 |       const videoEl = document.createElement('video');
51 |       Object.defineProperty(videoEl, 'naturalWidth', {
52 |         get: () => {
53 |           return videoEl.videoWidth;
54 |         },
55 |       });
56 |       Object.defineProperty(videoEl, 'naturalHeight', {
57 |         get: () => {
58 |           return videoEl.videoHeight;
59 |         },
60 |       });
61 |       videoEl.addEventListener('loadedmetadata', () => {
62 |         videoEl.controls = false;
63 |         videoEl.loop = true;
64 |         videoEl.muted = true;
65 |         cb(videoEl);
66 |       });
67 |       videoEl.addEventListener('error', () => {
68 |         cb();
69 |       });
70 |       videoEl.src = url;
71 |     });
72 |   };
73 | 
74 |   const createImageNode = (url) => {
75 |     return new Promise((cb) => {
76 |       const imgEl = document.createElement('img');
77 |       imgEl.onload = () => {
78 |         cb(imgEl);
79 |       };
80 |       imgEl.addEventListener('error', () => {
81 |         cb();
82 |       });
83 |       imgEl.src = url;
84 |     });
85 |   };
86 | 
87 |   nodeType.prototype.onDrawBackground = function (ctx) {
88 |     if (this.flags.collapsed) return;
89 | 
90 |     let imageURLs = (this.images ?? []).map((i) =>
91 |       typeof i === 'string' ? i : formatUploadedUrl(i),
92 |     );
93 |     let imagesChanged = false;
94 | 
95 |     if (JSON.stringify(this.displayingImages) !== JSON.stringify(imageURLs)) {
96 |       this.displayingImages = imageURLs;
97 |       imagesChanged = true;
98 |     }
99 | 
100 |     if (!imagesChanged) return;
101 |     if (!imageURLs.length) {
102 |       this.imgs = null;
103 |       this.animatedImages = false;
104 |       return;
105 |     }
106 | 
107 |     const promises = imageURLs.map((url) => {
108 |       if (url.startsWith('/view')) {
109 |         url = window.location.origin + url;
110 |       }
111 | 
112 |       const u = new URL(url);
113 |       const filename =
114 |         u.searchParams.get('filename') || u.searchParams.get('name') || u.pathname.split('/').pop();
115 |       const ext = filename.split('.').pop();
116 |       const format = ['gif', 'webp', 'avif'].includes(ext) ? 'image' : 'video';
117 |       if (format === 'video') {
118 |         return createVideoNode(url);
119 |       } else {
120 |         return createImageNode(url);
121 |       }
122 |     });
123 | 
124 |     Promise.all(promises)
125 |       .then((imgs) => {
126 |         this.imgs = imgs.filter(Boolean);
127 |       })
128 |       .then(() => {
129 |         if (!this.imgs.length) return;
130 | 
131 |         this.animatedImages = true;
132 |         const widgetIdx = this.widgets?.findIndex((w) => w.name === ANIM_PREVIEW_WIDGET);
133 | 
134 |         if (widgetIdx > -1) {
135 |           // Replace content
136 |           const widget = this.widgets[widgetIdx];
137 |           widget.options.host.updateImages(this.imgs);
138 |         } else {
139 |           const host = createImageHost(this);
140 |           this.setSizeForImage(true);
141 |           const widget = this.addDOMWidget(ANIM_PREVIEW_WIDGET, 'img', host.el, {
142 |             host,
143 |             getHeight: host.getHeight,
144 |             onDraw: host.onDraw,
145 |             hideOnZoom: false,
146 |           });
147 |           widget.serializeValue = () => ({
148 |             height: host.el.clientHeight,
149 |           });
150 | 
151 |           widget.options.host.updateImages(this.imgs);
152 |         }
153 | 
154 |         this.imgs.forEach((img) => {
155 |           if (img instanceof HTMLVideoElement) {
156 |             img.muted = true;
157 |             img.autoplay = true;
158 |             img.play();
159 |           }
160 |         });
161 |       });
162 |   };
163 | 
164 |   const { textWidget, comboWidget } = options;
165 | 
166 |   if (textWidget) {
167 |     chainCallback(nodeType.prototype, 'onNodeCreated', function () {
168 |       const pathWidget = this.widgets.find((w) => w.name === textWidget);
169 |       pathWidget._value = pathWidget.value;
170 |       Object.defineProperty(pathWidget, 'value', {
171 |         set: (value) => {
172 |           pathWidget._value = value;
173 |           pathWidget.inputEl.value = value;
174 |           this.images = (value ?? '').split('\n').filter((url) => URL_REGEX.test(url));
175 |         },
176 |         get: () => {
177 |           return pathWidget._value;
178 |         },
179 |       });
180 |       pathWidget.inputEl.addEventListener('change', (e) => {
181 |         const value = e.target.value;
182 |         pathWidget._value = value;
183 |         this.images = (value ?? '').split('\n').filter((url) => URL_REGEX.test(url));
184 |       });
185 | 
186 |       // Set value to ensure preview displays on initial add.
187 |       pathWidget.value = pathWidget._value;
188 |     });
189 |   }
190 | 
191 |   if (comboWidget) {
192 |     chainCallback(nodeType.prototype, 'onNodeCreated', function () {
193 |       const pathWidget = this.widgets.find((w) => w.name === comboWidget);
194 |       pathWidget._value = pathWidget.value;
195 |       Object.defineProperty(pathWidget, 'value', {
196 |         set: (value) => {
197 |           pathWidget._value = value;
198 |           if (!value) {
199 |             return this.images = []
200 |           }
201 | 
202 |           const parts = value.split("/")
203 |           const filename = parts.pop()
204 |           const subfolder = parts.join("/")
205 |           const extension = filename.split(".").pop();
206 |           const format = (["gif", "webp", "avif"].includes(extension)) ? 'image' : 'video'
207 |           this.images = [formatUploadedUrl({ filename, subfolder, type: "input", format: format })]
208 |         },
209 |         get: () => {
210 |           return pathWidget._value;
211 |         },
212 |       });
213 | 
214 |       // Set value to ensure preview displays on initial add.
215 |       pathWidget.value = pathWidget._value;
216 |     });
217 |   }
218 | 
219 |   chainCallback(nodeType.prototype, "onExecuted", function (message) {
220 |     if (message?.videos) {
221 |       this.images = message?.videos.map(formatUploadedUrl);
222 |       if(nodeType.comfyClass === 'audio2video') {
223 |         localStorage.setItem('savedVideoUrls', JSON.stringify(this.images));
224 |       }
225 |     }
226 |   });
227 | 
228 |   // Restoring state in onConfigure
229 |   chainCallback(nodeType.prototype, "onConfigure", function () {
230 |     if(nodeType.comfyClass === 'audio2video') {
231 |       const savedVideoUrls = JSON.parse(localStorage.getItem('savedVideoUrls'));
232 |       if (savedVideoUrls) {
233 |         this.images = savedVideoUrls;
234 |       }
235 |     }
236 | 
237 |   });
238 | 
239 | }
240 | 
241 | app.registerExtension({
242 |   name: "ManaNodes.audio2video",
243 |   init() {
244 |     $el('style', {
245 |       textContent: style,
246 |       parent: document.head,
247 |     });
248 |   },
249 |   async beforeRegisterNodeDef(nodeType, nodeData) {
250 |     if (nodeData.name !== "Combine Video") {
251 |       return;
252 |     }
253 | 
254 |     addVideoPreview(nodeType);
255 |   },
256 | });
257 | 
--------------------------------------------------------------------------------
/web/js/vid_upload.js:
--------------------------------------------------------------------------------
1 | import { app } from "../../../scripts/app.js";
2 | import { api } from "../../../scripts/api.js";
3 | 
4 | import {
5 |   chainCallback,
6 |   addVideoPreview,
7 | } from "./vid_preview.js";
8 | 
9 | async function uploadFile(file) {
10 |   try {
11 |     // Wrap file in formdata so it includes filename
12 |     const body = new FormData();
13 |     const new_file = new File([file], file.name, {
14 |       type: file.type,
15 |       lastModified: file.lastModified,
16 |     });
17 |     body.append("image", new_file);
18 |     body.append("subfolder", "video");
19 |     const resp = await api.fetchApi("/upload/image", {
20 |       method: "POST",
21 |       body,
22 |     });
23 | 
24 |     if (resp.status === 200 || resp.status === 201) {
25 |       return resp.json();
26 |     } else {
27 |       alert(`Upload failed: ${resp.statusText}`);
28 |     }
29 |   } catch (error) {
30 |     alert(`Upload failed: ${error}`);
31 |   }
32 | }
33 | 
34 | function addUploadWidget(nodeType, widgetName) {
35 |   chainCallback(nodeType.prototype, "onNodeCreated", function () {
36 |     const pathWidget = this.widgets.find((w) => w.name === widgetName);
37 |     if (pathWidget.element) {
38 |       pathWidget.options.getMinHeight = () => 50;
39 |       pathWidget.options.getMaxHeight = () => 150;
40 |     }
41 | 
42 |     const fileInput = document.createElement("input");
43 |     chainCallback(this, "onRemoved", () => {
44 |       fileInput?.remove();
45 |     });
46 | 
47 |     Object.assign(fileInput, {
48 |       type: "file",
49 |       accept: "video/webm,video/mp4,video/mkv,image/gif,image/webp",
50 |       style: "display: none",
51 |       onchange: async () => {
52 |         if (fileInput.files.length) {
53 |           const params = await uploadFile(fileInput.files[0]);
54 |           if (!params) {
55 |             // upload failed and file cannot be added to options
56 |             return;
57 |           }
58 | 
59 |           fileInput.value = "";
60 |           const filename = [params.subfolder, params.name || params.filename].filter(Boolean).join('/')
61 |           pathWidget.value = filename;
62 |           pathWidget.options.values.push(filename);
63 |         }
64 |       },
65 |     });
66 | 
67 |     document.body.append(fileInput);
68 |     let uploadWidget = this.addWidget(
69 |       "button",
70 |       "choose video to split",
71 |       "image",
72 |       () => {
73 |         app.canvas.node_widget = null;
74 |         fileInput.click();
75 |       }
76 |     );
77 |     uploadWidget.options.serialize = false;
78 |   });
79 | }
80 | 
81 | // Adds an upload button to the nodes
82 | app.registerExtension({
83 |   name: "ManaNodes.video2audio",
84 |   async beforeRegisterNodeDef(nodeType, nodeData, app) {
85 | 
86 |     if (nodeData?.input?.required?.video?.[1]?.mana_video_upload === true) {
87 | 
88 |       addUploadWidget(nodeType, 'video');
89 |       addVideoPreview(nodeType, { comboWidget: 'video' });
90 |     }
91 |   },
92 | });
93 | 
--------------------------------------------------------------------------------