├── .gitattributes
├── .github
│   └── ISSUE_TEMPLATE
│       ├── bug_report.md
│       └── feature_request.md
├── .gitignore
├── README.md
├── apiExample.py
├── docs
│   ├── api.md
│   ├── images
│   │   ├── advanced_options_detection.jpg
│   │   ├── advanced_options_generation.jpg
│   │   ├── advanced_options_inpainting.jpg
│   │   ├── advanced_options_others.jpg
│   │   ├── controlnet.jpg
│   │   ├── defaults.jpg
│   │   ├── hiresfix_options_advanced.jpg
│   │   ├── hiresfix_options_general.jpg
│   │   ├── inpaint_merge.jpg
│   │   ├── main_screenshot.jpg
│   │   ├── replacer_options.jpg
│   │   ├── replacer_script.jpg
│   │   ├── replacer_video_animate_diff.jpg
│   │   ├── replacer_video_common.jpg
│   │   ├── replacer_video_frame_by_frame.jpg
│   │   ├── replacer_video_sparsectrl.jpg
│   │   └── segment_anything_options.jpg
│   ├── options.md
│   ├── tips.md
│   ├── usage.md
│   └── video.md
├── html
│   └── replacer_footer.html
├── install.py
├── javascript
│   ├── replacer.js
│   └── zoom-replacer-modified.js
├── metadata.ini
├── replacer
│   ├── extensions
│   │   ├── animatediff.py
│   │   ├── arplusplus.py
│   │   ├── background_extensions.py
│   │   ├── controlnet.py
│   │   ├── image_comparison.py
│   │   ├── inpaint_difference.py
│   │   ├── replacer_extensions.py
│   │   └── soft_inpainting.py
│   ├── generate.py
│   ├── generation_args.py
│   ├── hires_fix.py
│   ├── inpaint.py
│   ├── mask_creator.py
│   ├── options.py
│   ├── tools.py
│   ├── ui
│   │   ├── apply_hires_fix.py
│   │   ├── generate_ui.py
│   │   ├── make_advanced_options.py
│   │   ├── make_hiresfix_options.py
│   │   ├── replacer_main_ui.py
│   │   ├── replacer_tab_ui.py
│   │   ├── tools_ui.py
│   │   └── video
│   │       ├── generation.py
│   │       ├── masking.py
│   │       ├── project.py
│   │       ├── replacer_video_tab_ui.py
│   │       ├── video_generation_ui.py
│   │       ├── video_masking_ui.py
│   │       ├── video_options_ui.py
│   │       └── video_project_ui.py
│   ├── video_animatediff.py
│   └── video_tools.py
├── scripts
│   ├── replacer_api.py
│   ├── replacer_main_ui.py
│   └── replacer_script.py
└── style.css
/.gitattributes:
--------------------------------------------------------------------------------
1 | javascript/zoom-replacer-modified.js linguist-vendored
2 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | Known issues. Do not open an issue if you are facing a problem from this list:
11 |
12 | > ImportError: cannot import name 'ui_toprow' from 'modules' (unknown location)
13 |
14 | You need to update the automatic1111 webui. Run the `git pull` command inside the webui root
15 |
16 | > ModuleNotFoundError: No module named 'scripts.sam'
17 |
18 | You need to install https://github.com/continue-revolution/sd-webui-segment-anything
19 | Do not confuse it with Inpaint Anything!
20 |
21 | > sam = sam_model_registry[model_type]
22 | > KeyError: ''
23 |
24 | The Segment Anything extension currently doesn't support FastSAM and Matting-Anything. Ask about this here: https://github.com/continue-revolution/sd-webui-segment-anything/issues/135
25 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | .vscode
3 | .directory
4 | ExtensionName.txt
5 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Replacer
2 |
3 | Replacer is an extension for [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui). The goal of this extension is to automate object masking via a detection prompt, using [sd-webui-segment-anything](https://github.com/continue-revolution/sd-webui-segment-anything), and img2img inpainting in one easy-to-use tab. It is also useful for batch inpainting, and for inpainting video with AnimateDiff
4 |
5 |
6 | 
7 |
8 | You can also draw your mask instead of, or in addition to, detection, take advantage of the convenient HiresFix option, and use ControlNet inpainting while preserving the original image resolution and aspect ratio
9 |
10 | > If you find this project useful, please star it on GitHub!
11 |
12 | ## Installation
13 | 1. Install the [sd-webui-segment-anything](https://github.com/continue-revolution/sd-webui-segment-anything) extension. If its tab bothers you, you can hide it in Replacer's settings. Go to the `Extensions` tab -> `Available` -> click `Load from` and search for _"sd-webui-segment-anything"_
14 | 2. Download the [sam_hq_vit_l.pth](https://huggingface.co/lkeab/hq-sam/resolve/main/sam_hq_vit_l.pth) model, or another one from the list below, and put it into `extensions/sd-webui-segment-anything/models/sam`
15 | 3. For a faster hires fix, download [lcm-lora-sdv1-5](https://huggingface.co/latent-consistency/lcm-lora-sdv1-5/blob/main/pytorch_lora_weights.safetensors), rename it to `lcm-lora-sdv1-5.safetensors`, and put it into `models/Lora`. Or, if you already have an LCM LoRA, change the hires suffix in the extension options
16 | 4. Install this extension. Go to the `Extensions` tab -> `Available` -> click `Load from` and search for _"Replacer"_. For AMD and Intel GPUs, and possibly other hardware, you may need to enable CPU for detection in Replacer's settings, but try without this option first. It is also handy if your NVidia GPU has very little VRAM (e.g. 2GB, which I've tested). Go to `Settings` -> `Replacer` and enable CPU for detection
17 | 5. Reload UI
18 |
19 | If you don't want to use the Video feature, that's all you need. The following steps are for Video only:
20 |
21 | 1. Install the [sd-webui-animatediff](https://github.com/continue-revolution/sd-webui-animatediff) and [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) extensions. Use the `Extensions` -> `Available` tab and find them there as well
22 | 2. Download the [mm_sd15_v3.safetensors](https://huggingface.co/conrevo/AnimateDiff-A1111/resolve/main/motion_module/mm_sd15_v3.safetensors) AnimateDiff motion model, and put it into the `extensions/sd-webui-animatediff/model` directory
23 | 3. Download the [control_v11p_sd15_inpaint_fp16.safetensors](https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors) ControlNet model and put it into the `models/ControlNet` directory
24 | 4. I strongly recommend downloading the [mm_sd15_v3_sparsectrl_rgb.safetensors](https://huggingface.co/conrevo/AnimateDiff-A1111/resolve/main/control/mm_sd15_v3_sparsectrl_rgb.safetensors) and [mm_sd15_v3_sparsectrl_scribble.safetensors](https://huggingface.co/conrevo/AnimateDiff-A1111/resolve/main/control/mm_sd15_v3_sparsectrl_scribble.safetensors) ControlNet models. Put them into the `models/ControlNet` directory as well. Then you can select the SparseCtrl module in the ControlNet extension. The RGB one requires the "none" preprocessor
25 |
26 |
27 | ##### SAM models list:
28 |
29 | The SAM-HQ models work best for me. Choose one depending on your VRAM, and remember to add the GroundingDINO model size (694MB-938MB) on top of the SAM model size
30 |
31 |
32 |
33 | 1. [SAM](https://github.com/facebookresearch/segment-anything) from Meta AI.
34 | - [2.56GB sam_vit_h](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)
35 | - [1.25GB sam_vit_l](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth)
36 | - [375MB sam_vit_b](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth)
37 |
38 | 2. [SAM-HQ](https://github.com/SysCV/sam-hq) from SysCV.
39 | - [2.57GB sam_hq_vit_h](https://huggingface.co/lkeab/hq-sam/resolve/main/sam_hq_vit_h.pth)
40 | - [1.25GB sam_hq_vit_l](https://huggingface.co/lkeab/hq-sam/resolve/main/sam_hq_vit_l.pth)
41 | - [379MB sam_hq_vit_b](https://huggingface.co/lkeab/hq-sam/resolve/main/sam_hq_vit_b.pth)
42 |
43 | 3. [MobileSAM](https://github.com/ChaoningZhang/MobileSAM) from Kyung Hee University.
44 | - [39MB mobile_sam](https://github.com/ChaoningZhang/MobileSAM/blob/master/weights/mobile_sam.pt)
45 |
46 |
47 |
48 | _FastSAM_ and _Matting-Anything_ aren't currently supported
49 |
50 |
51 |
52 | ## How does it work?
53 |
54 | First, a GroundingDINO model detects the objects you provided in the detection prompt. Then a Segment Anything model generates their contours. The extension then randomly chooses 1 of the 3 generated masks and inpaints it with the regular inpainting method of the a1111 webui (see the rough sketch below)
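
A rough, runnable sketch of that pipeline (my own illustration with hypothetical stub functions, not the extension's actual code):

```python
import random

# Illustration only: hypothetical stand-ins for GroundingDINO detection,
# SAM segmentation and a1111 img2img inpainting, to show the order of steps
# that Replacer automates.
def detect_boxes(image, detection_prompt): return ["box for 'cat'"]
def segment(image, boxes): return ["mask 1", "mask 2", "mask 3"]   # SAM returns 3 mask variants
def inpaint(image, mask, positive_prompt, seed): return f"inpainted using {mask}"

def replace(image, detection_prompt, positive_prompt, seed=1):
    boxes = detect_boxes(image, detection_prompt)        # 1. detect objects by prompt
    masks = segment(image, boxes)                        # 2. build their contours
    mask = random.Random(seed).choice(masks)             # 3. 1 of 3 masks is chosen by seed
    return inpaint(image, mask, positive_prompt, seed)   # 4. regular a1111 inpainting

print(replace("photo.jpg", "cat", "a dog"))
```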
55 |
56 | When you press the "Apply hires fix" button, the extension regenerates the image with exactly the same settings, excluding upscaler_for_img2img. Then it applies inpainting with "Original" masked content mode and lower denoising but higher resolution.
57 |
58 |
59 |
60 | ## Supported extensions:
61 |
62 |
63 |
64 | 1. [Lama cleaner as masked content](https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content)
65 |
66 | 1. [Image Comparison](https://github.com/Haoming02/sd-webui-image-comparison)
67 |
68 | 1. [ControlNet](https://github.com/Mikubill/sd-webui-controlnet)
69 |
70 | 1. [AnimateDiff](https://github.com/continue-revolution/sd-webui-animatediff)
71 |
72 | 1. [ar-plusplus](https://github.com/altoiddealer/--sd-webui-ar-plusplus) (maybe works with "Aspect Ratio and Resolution Buttons" forks)
73 |
74 | 1. Other extension scripts which don't have control arguments, e.g. [Hardware Info in metadata](https://github.com/light-and-ray/sd-webui-hardware-info-in-metadata), [NudeNet NSFW Censor](https://github.com/w-e-w/sd-webui-nudenet-nsfw-censor), built-in **Hypertile**
75 |
76 |
77 |
78 |
79 |
80 | ## Docs:
81 | ### - [Usage of Replacer](/docs/usage.md)
82 | ### - [Video: AnimateDiff and Frame by frame](/docs/video.md)
83 | ### - [Replacer Options](/docs/options.md)
84 | ### - [Information about Replacer API](/docs/api.md)
85 | ### - [Useful tips: how to change defaults, maximal value of sliders, and how to get inpainting model](/docs/tips.md)
86 |
87 |
88 | --------------------------------
89 | ## "I want to help, how can I do it?"
90 |
91 | If you want to help with the extension, you can take on one of the following tasks, which I can't do myself:
92 |
93 | - Make a Colab link for auto install (or something similar) https://github.com/light-and-ray/sd-webui-replacer/issues/10
94 | - Make a union_inpaint preprocessor in the ControlNet extension https://github.com/Mikubill/sd-webui-controlnet/issues/3035 https://github.com/light-and-ray/sd-webui-replacer/issues/89 In theory, it could make Video Replacer work with SDXL models
95 |
96 | ## Need to do (for me):
97 |
98 | - ☑️ cache mask
99 | - ☑️ batch processing
100 | - ☑️ "apply hires fix" button
101 | - ☑️ additional options
102 | - ☑️ progress bar + interrupt
103 | - ☑️ option to pass into hires fix automatically
104 | - ☑️ control net
105 | - ☑️ pass previous frame into ControlNet for video
106 | - tiled vae
107 | - ☑️ "hide segment anything extension" option
108 | - ☑️ txt2img script
109 | - more video and mask input types
110 | - RIFE frames interpolation
111 | - allow multiple instances (presets)
112 |
113 | ### small todo:
114 | - add hires fix args into metadata
115 |
--------------------------------------------------------------------------------
/apiExample.py:
--------------------------------------------------------------------------------
1 | #!/bin/python3
2 | from PIL import Image
3 | import requests, base64, io, argparse
4 |
5 | SD_WEBUI = 'http://127.0.0.1:7860'
6 |
7 | parser = argparse.ArgumentParser()
8 | parser.add_argument('filename')
9 | args = parser.parse_args()
10 |
11 | with open(args.filename, 'rb') as image_file:
12 |     base64_image = base64.b64encode(image_file.read()).decode('utf-8')
13 |
14 | payload = {
15 |     "input_image": base64_image,
16 |     "detection_prompt": "background",
17 |     "positive_prompt": "waterfall",
18 |     "sam_model_name": "sam_hq_vit_h.pth",
19 |     "dino_model_name": "GroundingDINO_SwinB (938MB)",
20 |     "use_hires_fix": True,
21 |
22 |     "scripts": { # "scripts" has the same format as "alwayson_scripts" in /sdapi/v1/txt2img
23 |         "controlnet": {
24 |             "args": [
25 |                 { # See the ControlNetUnit dataclass for all possible fields
26 |                     "enabled": True,
27 |                     "module": "openpose_full",
28 |                     "model": "control_v11p_sd15_openpose [cab727d4]",
29 |                 },
30 |             ]
31 |         },
32 |         "soft inpainting": { # Default args from the UI; a high "mask_blur" is recommended
33 |             "args": [False, 1, 0.5, 4, 0, 1, 2]
34 |         },
35 |     },
36 | }
37 |
38 | response = requests.post(url=f'{SD_WEBUI}/replacer/replace', json=payload)
39 | if response.status_code == 200:
40 |     response = response.json()
41 |     if response['info']:
42 |         print(response['info'])
43 |     if response['image']:
44 |         Image.open(io.BytesIO(base64.b64decode(response['image']))).show()
45 | else:
46 |     print(response.json())
47 |
--------------------------------------------------------------------------------
/docs/api.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | ### API
4 | API is available on `/replacer/replace`
5 |
6 | ```python
7 | input_image: str = "base64 image"
8 | detection_prompt: str = ""
9 | avoidance_prompt: str = ""
10 | positive_prompt: str = ""
11 | negative_prompt: str = ""
12 | width: int = 512
13 | height: int = 512
14 | sam_model_name: str = sam_model_list[0] if sam_model_list else ""
15 | dino_model_name: str = dino_model_list[0]
16 | seed: int = -1
17 | sampler: str = "DPM++ 2M SDE" if IS_WEBUI_1_9 else "DPM++ 2M SDE Karras"
18 | scheduler: str = "Automatic"
19 | steps: int = 20
20 | box_threshold: float = 0.3
21 | mask_expand: int = 35
22 | mask_blur: int = 4
23 | mask_num: str = "Random"
24 | max_resolution_on_detection = 1280
25 | cfg_scale: float = 5.5
26 | denoise: float = 1.0
27 | inpaint_padding = 40
28 | inpainting_mask_invert: bool = False
29 | upscaler_for_img2img : str = ""
30 | fix_steps : bool = False
31 | inpainting_fill : int = 0
32 | sd_model_checkpoint : str = ""
33 | clip_skip: int = 1
34 | rotation_fix: str = '-' # choices: '-', '⟲', '⟳', '🗘'
35 | extra_include: list = ["mask", "box", "cut", "preview", "script"]
36 | variation_seed: int = -1
37 | variation_strength: float = 0.0
38 | integer_only_masked: bool = False
39 | forbid_too_small_crop_region: bool = True
40 | correct_aspect_ratio: bool = True
41 | avoidance_mask: str = "base64 image"
42 | custom_mask: str = "base64 image"
43 | only_custom_mask: bool = True # only if there is a custom mask
44 |
45 | use_hires_fix: bool = False
46 | hf_upscaler: str = "ESRGAN_4x"
47 | hf_steps: int = 4
48 | hf_sampler: str = "Use same sampler"
49 | hf_scheduler: str = "Use same scheduler"
50 | hf_denoise: float = 0.35
51 | hf_cfg_scale: float = 1.0
52 | hf_positive_prompt_suffix: str = ""
53 | hf_size_limit: int = 1800
54 | hf_above_limit_upscaler: str = "Lanczos"
55 | hf_unload_detection_models: bool = True
56 | hf_disable_cn: bool = True
57 | hf_extra_mask_expand: int = 5
58 | hf_positive_prompt: str = ""
59 | hf_negative_prompt: str = ""
60 | hf_sd_model_checkpoint: str = "Use same checkpoint"
61 | hf_extra_inpaint_padding: int = 250
62 | hf_extra_mask_blur: int = 2
63 | hf_randomize_seed: bool = True
64 | hf_soft_inpaint: str = "Same"
65 | hf_supersampling: float = 1.6
66 |
67 | scripts : dict = {} # ControlNet and Soft Inpainting. See apiExample.py for example
68 | ```
69 |
70 | The available options can be queried at `/replacer/available_options`
71 |
72 | http://127.0.0.1:7860/docs#/default/api_replacer_replace_replacer_replace_post
73 |
74 | See a usage example in the [apiExample.py](/apiExample.py) script, and a minimal sketch below
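
Below is a minimal sketch of calling both endpoints with `requests` (my own example, not part of the repo). It assumes a webui running locally on the default port, an `image.png` next to the script, and that `/replacer/available_options` accepts GET:

```python
import base64, requests

SD_WEBUI = 'http://127.0.0.1:7860'

# List the options Replacer exposes (assumption: the endpoint accepts GET)
print(requests.get(f'{SD_WEBUI}/replacer/available_options').json())

# Minimal replace request: only a few fields are set, everything else
# falls back to the defaults listed above
with open('image.png', 'rb') as f:
    payload = {
        "input_image": base64.b64encode(f.read()).decode('utf-8'),
        "detection_prompt": "background",
        "positive_prompt": "waterfall",
    }
result = requests.post(f'{SD_WEBUI}/replacer/replace', json=payload).json()
print(result['info'])
```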
75 |
--------------------------------------------------------------------------------
/docs/images/advanced_options_detection.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/advanced_options_detection.jpg
--------------------------------------------------------------------------------
/docs/images/advanced_options_generation.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/advanced_options_generation.jpg
--------------------------------------------------------------------------------
/docs/images/advanced_options_inpainting.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/advanced_options_inpainting.jpg
--------------------------------------------------------------------------------
/docs/images/advanced_options_others.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/advanced_options_others.jpg
--------------------------------------------------------------------------------
/docs/images/controlnet.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/controlnet.jpg
--------------------------------------------------------------------------------
/docs/images/defaults.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/defaults.jpg
--------------------------------------------------------------------------------
/docs/images/hiresfix_options_advanced.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/hiresfix_options_advanced.jpg
--------------------------------------------------------------------------------
/docs/images/hiresfix_options_general.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/hiresfix_options_general.jpg
--------------------------------------------------------------------------------
/docs/images/inpaint_merge.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/inpaint_merge.jpg
--------------------------------------------------------------------------------
/docs/images/main_screenshot.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/main_screenshot.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_options.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_options.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_script.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_script.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_video_animate_diff.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_video_animate_diff.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_video_common.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_video_common.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_video_frame_by_frame.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_video_frame_by_frame.jpg
--------------------------------------------------------------------------------
/docs/images/replacer_video_sparsectrl.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/replacer_video_sparsectrl.jpg
--------------------------------------------------------------------------------
/docs/images/segment_anything_options.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/light-and-ray/sd-webui-replacer/15c5134296cca2e50b4229a2ae33c61cca4cc391/docs/images/segment_anything_options.jpg
--------------------------------------------------------------------------------
/docs/options.md:
--------------------------------------------------------------------------------
1 | # Options
2 |
3 | 
4 |
5 | Overriding the prompt examples can significantly decrease the time you spend interacting with the extension
6 |
7 | 
8 |
9 | In the Segment Anything options, I recommend enabling `Use local groundingdino to bypass C++ problem` if you see C++ warnings in the console.
10 |
--------------------------------------------------------------------------------
/docs/tips.md:
--------------------------------------------------------------------------------
1 |
2 | # Useful Tips!
3 | ## How to change default values of advanced options and hires fix options?
4 |
5 | 
6 |
7 | You need to reload the web page, then set your desired values. Then go to the "Defaults" section in the "Settings" tab. Click "View changes", check that everything is OK, then click "Apply" and "Reload UI"
8 |
9 | ## How to get an inpainting model?
10 |
11 | I recommend using the [EpicPhotoGasm - Z - Inpainting](https://civitai.com/models/132632?modelVersionId=201346) model for realism. If you already have a favorite model but it doesn't have an inpainting version, you can make one in the "Checkpoint Merger" tab:
12 | 1. Select your target model as "model B"
13 | 2. Select [sd-v1-5-inpainting](https://huggingface.co/webui/stable-diffusion-inpainting/blob/main/sd-v1-5-inpainting.safetensors) or [sd_xl_base_1.0_inpainting_0.1.safetensors](https://huggingface.co/wangqyqq/sd_xl_base_1.0_inpainting_0.1.safetensors/blob/main/sd_xl_base_1.0_inpainting_0.1.safetensors) for SDXL as "model A"
14 | 3. Select `sd_v1-5-pruned-emaonly` or [sd_xl_base_1.0_0.9vae.safetensors](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors) for SDXL as "model C"
15 | 4. Set `Custom Name` to the same as your target model name (the `.inpainting` suffix will be added automatically)
16 | 5. Set `Multiplier (M)` to 1.0
17 | 6. Set `Interpolation Method` to "add difference", and enable "Save as float16"
18 | 7. For SDXL, I recommend selecting the [fixed SDXL VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors) in the baked VAE dropdown
19 | 8. Merge
20 |
21 | 
22 |
23 |
24 | ## How to change maximum values of sliders?
25 |
26 | In the `ui-config.json` file in the webui root you can edit the maximum and minimum values of the sliders (see the sketch below)
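
Since the exact key names depend on your installed components, here is a small sketch (my own, assuming Replacer's sliders are registered under keys containing "replacer") that lists the `/maximum` and `/minimum` entries you can edit; run it from the webui root:

```python
import json

# Print the slider limits related to Replacer found in ui-config.json,
# so you can see exactly which keys to change
with open('ui-config.json', encoding='utf-8') as f:
    ui_config = json.load(f)

for key, value in ui_config.items():
    if 'replacer' in key.lower() and ('maximum' in key or 'minimum' in key):
        print(key, '=', value)
```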
27 |
28 |
--------------------------------------------------------------------------------
/docs/usage.md:
--------------------------------------------------------------------------------
1 |
2 | # Usage
3 | ## General
4 | You just need to upload your image, enter 3 prompts, and click "Run". You can override the prompt examples in Settings with the prompts you commonly use. Don't forget to select an inpainting checkpoint
5 |
6 | Be sure you are using an inpainting model
7 |
8 | By default, if a prompt is empty, the first prompt from the examples is used. You can disable this behavior in settings for the positive and negative prompts. The detection prompt cannot be empty
9 |
10 | You can detect several objects at once, just separate them with a comma `,`
11 |
12 |
13 | ## Advanced options
14 |
15 | ### Generation
16 | 
17 |
18 | - _"Do exactly the number of steps the slider specifies"_: otherwise, the actual number of steps is the steps from the slider * denoising strength (see the sketch after this list)
19 | - _"width"_, _"height"_: internal resolution for generation. 512 for SD1, 1024 for SDXL. If you increase it, it will produce mutations at high denoising strength
20 | - _"Correct aspect ratio"_: Preserve the original width x height number of pixels, but follow the generated mask's aspect ratio. In some cases this can hide necessary context
21 | - _"Upscaler for img2Img"_: which method will be used to scale the generated region back into the original image. It can be used instead of hires fix. DAT upscalers are good; for example, this is a good one: https://openmodeldb.info/models/4x-FaceUpDAT
22 | - _"Rotation fix"_: use this if your photo is rotated by 90, 180 or 270 degrees and that causes artifacts in detection and generation
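
A small sketch of the first option above (my own illustration; the function is hypothetical, and the exact rounding is an assumption based on how a1111 img2img scales steps by denoising strength):

```python
def actual_steps(slider_steps: int, denoising: float, do_exact: bool) -> int:
    """Roughly how many sampling steps actually run."""
    if do_exact:
        return slider_steps                         # option enabled: the slider value is used as-is
    return max(1, round(slider_steps * denoising))  # option disabled: scaled by denoising strength

print(actual_steps(20, 0.6, do_exact=False))  # -> 12
print(actual_steps(20, 0.6, do_exact=True))   # -> 20
```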
23 |
24 | ### Detection
25 | 
26 |
27 | - _"Mask num"_: SegmentAnything generates 3 masks per image. By default, one is selected randomly based on the seed, but you can override it. You can see which mask num was used in the generation info
28 |
29 | ### Inpainting
30 | 
31 |
32 | - _"Padding"_: How much context around the mask will be passed into generation. You can see it in the live preview
33 | - _"Denoising"_: 0.0 - the original image (with the Euler a sampler), 1.0 - a completely new image. If you use a low denoising level, you need to use `Original` as masked content
34 | - _"Lama cleaner"_: removes the masked object before passing the image into inpainting. From the extension: https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content
35 | - _"Soft inpainting"_: can be used instead of inpainting models, or with a small `mask expand` to avoid changing the inpainting area too much, e.g. to change a color. You need to set a high `mask blur` for it!
36 | - _"Mask mode"_: useful to invert the selection (replace everything except something). You need to set a negative `mask expand` for it.
37 |
38 | ### Others
39 | 
40 |
41 | ### Avoidance
42 | - You can draw a mask and/or type a prompt; its result will be excluded from the main mask
43 |
44 | ### Custom mask
45 | - Use this if you want to use the extension for regular inpainting with a hand-drawn mask, to take advantage of HiresFix, batch processing or ControlNet inpainting, which are not available in the img2img/inpaint tab of the webui
46 | - Or it can be appended to the generated mask if `Do not use detection prompt if use custom mask` is disabled. It is the opposite of the avoidance mask
47 |
48 |
49 | ## HiresFix
50 | You can select the blurry image in the gallery and then press the "Apply HiresFix ✨" button. Or you can enable `Pass into hires fix automatically`
51 |
52 | The default settings are designed around using an LCM LoRA for a fast upscale. It requires the LCM LoRA I mentioned, CFG scale 1.0 and 4 sampling steps. In my opinion there is no difference in quality
53 |
54 | Although in txt2img the DPM++ samplers produce awful results with an LCM LoRA, during hires fix they produce a much better result. So I recommend the "Use the same sampler" option
55 |
56 | Note: hires fix is designed for a single-user server
57 |
58 | ### Options - General
59 | 
60 | - _"Extra inpaint padding"_: higher values are recommended because the generation size will never be larger than the original image
61 | - _"Hires supersampling"_: 1.0 means the resolution of the original image's crop region, but not smaller than the firstpass resolution. Values above 1.0 multiply each side by this number. It is calculated before the resolution limit is applied, so the result still can't be bigger than the limit you set above (see the sketch below)
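
An illustrative sketch of how that option is described above (my own interpretation; the helper is hypothetical, not Replacer's code, and it assumes the "not smaller than firstpass" rule applies per side and that the size limit caps the longer side):

```python
def hires_target_size(crop_w, crop_h, firstpass_w, firstpass_h,
                      supersampling=1.6, size_limit=1800):
    # 1.0 == resolution of the crop region, but never below the firstpass size
    w = max(crop_w, firstpass_w) * supersampling
    h = max(crop_h, firstpass_h) * supersampling
    # supersampling is applied before the limit, so the limit still wins
    scale = min(1.0, size_limit / max(w, h))
    return round(w * scale), round(h * scale)

print(hires_target_size(900, 600, 512, 512))  # -> (1440, 960)
```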
62 |
63 | ### Options - Advanced
64 | 
65 | - _"Unload detection models before hires fix"_: I recommend disabling it if you have a lot of VRAM. It has a significant negative impact on batches + `pass into hires fix automatically`
66 |
67 |
68 | ## Dedicated page
69 | A dedicated page (Replacer tab only) is available at the URL `/replacer-dedicated`
70 |
71 | ## ControlNet
72 | 
73 | The [ControlNet extension](https://github.com/Mikubill/sd-webui-controlnet) is also available here. (Forge support is discontinued. Revert to the last compatible version: `git checkout 3321b2cec451d`)
74 |
75 | ## Replacer script in txt2img/img2img tabs
76 | 
77 |
78 | You can use it to pass generated images into Replacer immediately
79 |
80 |
81 | ## Extension name
82 | If you don't like the "Replacer" name of this extension, you can provide your own name inside `ExtensionName.txt` in the root of the extension directory.
83 |
84 | Or you can override it using the environment variable `SD_WEBUI_REPLACER_EXTENSION_NAME`
85 |
86 | For example, on Linux:
87 | ```sh
88 | export SD_WEBUI_REPLACER_EXTENSION_NAME="Fast Inpaint"
89 | ```
90 |
91 | Or on Windows, in your `.bat` file:
92 | ```bat
93 | set SD_WEBUI_REPLACER_EXTENSION_NAME="Fast Inpaint"
94 | ```
95 |
--------------------------------------------------------------------------------
/docs/video.md:
--------------------------------------------------------------------------------
1 | # Video
2 |
3 | ## Common
4 | 
5 |
6 | You need to provide a path to your video file, or a URL in `file://` format. On Windows, right-click your file while holding the Alt key, and then select "Copy as path"
7 |
8 | To avoid accidental interruption, I recommend using my extension [prevent-interruption](https://github.com/light-and-ray/sd-webui-prevent-interruption) and w-e-w's [close-confirmation-dialogue](https://github.com/w-e-w/sdwebui-close-confirmation-dialogue)
9 |
10 | ## AnimateDiff mode
11 | 
12 |
13 | ### General advice
14 | Almost all advanced options work here. Inpaint padding doesn't, because this mode uses ControlNet inpainting. Lama cleaner as masked content enables the CN inpaint_only+lama module instead of inpaint_only
15 |
16 | Due to AnimateDiff's high consistency in comparison with the *"Frame by frame"* mode, you can use high `mask blur` and `mask expand`.
17 |
18 | Hires fix doesn't work here, and I think it basically can't, because it would decrease the consistency. But you can use the "upscaler for img2img" option - these upscalers work and are consistent enough.
19 |
20 | To increase consistency between fragments, you can try using `Fragment length` = 0. It works in 100% of cases, but can preserve artifacts throughout the whole video. Or you can use ControlNet, especially `SparseCtrl`. You can also adjust `Context batch size`, `Stride`, `Overlap`. I recommend making `Fragment length` a few times larger than `Context batch size`
21 |
22 | `Context batch size` is set up for 12GB of VRAM with one additional ControlNet unit. If you get an OutOfMemory error, decrease it
23 |
24 | If you have any other good advice, please post it in the GitHub issues, and I can place it here
25 |
26 | ### AnimateDiff options
27 | 1. **Number of frames** (*Fragment length, frames* inside Replacer) — Choose whatever number you like.
28 |
29 | If you enter something other than 0 that is smaller than your `Context batch size`: you will get the first `Number of frames` frames as the output fragment of your whole generation.
30 | 1. **FPS** (*Internal AD FPS* inside Replacer) — Frames per second, which is how many frames (images) are shown every second. If 16 frames are generated at 8 frames per second, your fragment’s duration is 2 seconds.
31 |
32 | 1. **Context batch size** — How many frames will be passed into the motion module at once. The SD1.5 motion modules are trained with 16 frames, so it’ll give the best results when the number of frames is set to `16`. SDXL HotShotXL motion modules are trained with 8 frames instead. Choose [1, 24] for V1 / HotShotXL motion modules and [1, 32] for V2 / AnimateDiffXL motion modules.
33 | 1. **Stride** — Max motion stride as a power of 2 (default: 1).
34 | 1. Due to the limitation of the infinite context generator, this parameter is effective only when `Number of frames` > `Context batch size`, including when ControlNet is enabled and the source video frame number > `Context batch size` and `Number of frames` is 0.
35 | 1. "Absolutely no closed loop" is only possible when `Stride` is 1.
36 | 1. For each `1 <= 2^i <= Stride`, the infinite context generator will try to make frames `2^i` apart temporal consistent. For example, if `Stride` is 4 and `Number of frames` is 8, it will make the following frames temporal consistent:
37 | - `Stride` == 1: [0, 1, 2, 3, 4, 5, 6, 7]
38 | - `Stride` == 2: [0, 2, 4, 6], [1, 3, 5, 7]
39 | - `Stride` == 4: [0, 4], [1, 5], [2, 6], [3, 7]
40 | 1. **Overlap** — Number of frames to overlap in context. If overlap is -1 (default): your overlap will be `Context batch size` // 4.
41 | 1. Due to the limitation of the infinite context generator, this parameter is effective only when `Number of frames` > `Context batch size`, including when ControlNet is enabled and the source video frame number > `Context batch size` and `Number of frames` is 0.
42 | 1. **Latent power** and **Latent scale** — The initial latent for each AnimateDiff frame is calculated using `init_alpha`, made with this formula: `init_alpha = 1 - frame_number ^ latent_power / latent_scale`. You can see these factors in the console log `AnimateDiff - INFO - Randomizing init_latent according to [ ... ]`. It describes the strength of the initial image for the frames. Inside Replacer, the initial image is the last image of the previous fragment, or the inpainted first frame in the first fragment (see the sketch after this list)
43 | 1. **FreeInit** - Use FreeInit to improve the temporal consistency of your videos.
44 | 1. The default parameters provide satisfactory results for most use cases.
45 | 1. Use the "Gaussian" filter when your motion is intense.
46 | 1. See the [original repo of FreeInit](https://github.com/TianxingWu/FreeInit) for more parameter settings.
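
A small numeric sketch of that formula (my own illustration; it assumes the common defaults latent_power = 1 and latent_scale = 32, and the clamp to 0 is just a safety guard in the sketch):

```python
def init_alpha(frame_number: int, latent_power: float = 1.0, latent_scale: float = 32.0) -> float:
    # init_alpha = 1 - frame_number ^ latent_power / latent_scale (clamped at 0)
    return max(0.0, 1.0 - frame_number ** latent_power / latent_scale)

# The further a frame is from the initial image, the weaker its influence:
for n in (0, 4, 8, 16):
    print(n, init_alpha(n))   # 0 -> 1.0, 4 -> 0.875, 8 -> 0.75, 16 -> 0.5
```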
47 |
48 | ### SparseCtrl
49 | SparseCtrl is a special kind of ControlNet model for AnimateDiff. Set it up in a ControlNet unit to use it. It produces a much better correspondence between the first result frame of a fragment and the initial image
50 |
51 | 
52 |
53 | You can download them here: https://huggingface.co/conrevo/AnimateDiff-A1111/tree/main/control The RGB one is for the "none" preprocessor, the scribble one is for scribbles
54 |
55 | ## Frame by frame mode
56 | You can use Replacer to inpaint video with the old frame-by-frame approach, using the regular Stable Diffusion inpainting method. It is very inconsistent, but in a few cases it can produce good enough results
57 |
58 | 
59 |
60 | To increase consistency, it's better to inpaint clear objects in a video of good enough quality. Your prompts need to produce consistent results.
61 |
62 | To suppress flickering, you can generate at a low FPS (e.g. 10), then interpolate it (x2) with an AI interpolation algorithm (e.g. [RIFE](https://github.com/megvii-research/ECCV2022-RIFE) or the [frame interpolation in the deforum sd-webui extension](https://github.com/deforum-art/sd-webui-deforum/wiki/Upscaling-and-Frame-Interpolation))
63 |
64 | You can also use the [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet) or [lama-cleaner](https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content) (with low denoising) extensions to increase consistency, if it fits your scenario
65 |
66 | A good option can also be to use `Pass the previous frame into ControlNet` with _IP-Adapter_, _Reference_, _Shuffle_, _T2IA-Color_, _T2IA-Style_
67 |
68 |
--------------------------------------------------------------------------------
/html/replacer_footer.html:
--------------------------------------------------------------------------------
1 |
12 |
13 |
14 | {versions}
15 |
16 |
--------------------------------------------------------------------------------
/install.py:
--------------------------------------------------------------------------------
1 | import launch
2 |
3 | if not launch.is_installed('imageio_ffmpeg'):
4 |     launch.run_pip('install imageio_ffmpeg')
5 |
--------------------------------------------------------------------------------
/javascript/replacer.js:
--------------------------------------------------------------------------------
1 | function submit_replacer() {
2 | let arguments_ = Array.from(arguments);
3 | galleryId = arguments_.pop();
4 | buttonsId = arguments_.pop();
5 | extraShowButtonsId = arguments_.pop();
6 | showSubmitButtons(buttonsId, false);
7 |
8 | let id = randomId();
9 |
10 | requestProgress(id,
11 | gradioApp().getElementById(galleryId + "_gallery_container"),
12 | gradioApp().getElementById(galleryId + "_gallery"),
13 | function () {
14 | showSubmitButtons(buttonsId, true);
15 | if (extraShowButtonsId) {
16 | showSubmitButtons(extraShowButtonsId, true);
17 | }
18 | }
19 | );
20 |
21 | let res = create_submit_args(arguments_);
22 |
23 | res[0] = id;
24 |
25 | console.log(res);
26 | return res;
27 | }
28 |
29 |
30 |
31 | titles = {
32 | ...titles,
33 | "Max resolution on detection": "If one side of the image is smaller than that, it will be resized before detection. It doesn't have effect on inpainting. Reduces vram usage and mask generation time.",
34 | "Mask Expand": "Mask dilation, px, relative to \"Max resolution on detection\"",
35 | "Extra mask expand": "Extra mask dilation on hires fix step, px, relative to \"Max resolution on detection\"",
36 | "Limit avoidance mask canvas resolution on creating": "Limit the canvas created by the button, using \"Max resolution on detection\" option",
37 | "Limit custom mask canvas resolution on creating": "Limit the canvas created by the button, using \"Max resolution on detection\" option",
38 | "Hires supersampling": "1.0 is the resolution of original image's crop region, but not smaller then firstpass resolution. More then 1.0 - multiplying on this number each sides. It calculates before limiting resolution, so it still can't be bigger then you set above",
39 | "Correct aspect ratio": "Preserve original width x height number of pixels, but follow generated mask's aspect ratio. In some cases can hide necessary context",
40 | };
41 |
42 |
43 | let replacer_gallery = undefined;
44 | onAfterUiUpdate(function () {
45 | if (!replacer_gallery) {
46 | replacer_gallery = attachGalleryListeners("replacer");
47 | }
48 | });
49 |
50 |
51 | function replacerGetCurrentSourceImg(dummy_component, isAvoid, needLimit, maxResolutionOnDetection) {
52 | const img = gradioApp().querySelector('#replacer_image div div img');
53 | let maskId = '';
54 | if (isAvoid) {
55 | maskId = 'replacer_avoidance_mask';
56 | } else {
57 | maskId = 'replacer_custom_mask';
58 | }
59 | const removeButton = gradioApp().getElementById(maskId).querySelector('button[aria-label="Remove Image"]');
60 | if (removeButton) {
61 | removeButton.click();
62 | }
63 | let resImg = img ? img.src : null;
64 | return [resImg, isAvoid, needLimit, maxResolutionOnDetection];
65 | }
66 |
67 |
68 |
69 | async function replacer_waitForOpts() {
70 | for (; ;) {
71 | if (window.opts && Object.keys(window.opts).length) {
72 | return window.opts;
73 | }
74 | await new Promise(resolve => setTimeout(resolve, 100));
75 | }
76 | }
77 |
78 | var isReplacerZoomAndPanIntegrationApplied = false;
79 |
80 | function replacerApplyZoomAndPanIntegration() {
81 | if (typeof window.applyZoomAndPanIntegration === "function" && !isReplacerZoomAndPanIntegrationApplied) {
82 | window.applyZoomAndPanIntegration("#replacer_advanced_options", ["#replacer_avoidance_mask", "#replacer_custom_mask"]);
83 |
84 | const maskIds = [...Array(10).keys()].map(i => `#replacer_video_mask_${i + 1}`);
85 | window.applyZoomAndPanIntegration("#replacer_video_masking_tab", maskIds);
86 |
87 | isReplacerZoomAndPanIntegrationApplied = true;
88 | }
89 | }
90 |
91 | function replacerApplyZoomAndPanIntegration_withMod() {
92 | if (typeof window.applyZoomAndPanIntegration === "function" && typeof window.applyZoomAndPanIntegration_replacer_mod === "function" && !isReplacerZoomAndPanIntegrationApplied) {
93 | window.applyZoomAndPanIntegration_replacer_mod("#replacer_advanced_options", ["#replacer_avoidance_mask", "#replacer_custom_mask"]);
94 | isReplacerZoomAndPanIntegrationApplied = true;
95 | }
96 | }
97 |
98 | onUiUpdate(async () => {
99 | if (isReplacerZoomAndPanIntegrationApplied) return;
100 | const opts = await replacer_waitForOpts();
101 |
102 | if ('set_scale_by_when_changing_upscaler' in opts) { // webui 1.9
103 | replacerApplyZoomAndPanIntegration();
104 | } else {
105 | replacerApplyZoomAndPanIntegration_withMod();
106 | }
107 | });
108 |
109 |
110 | function replacerRemoveInpaintDiffMaskUpload() {
111 | const mask = gradioApp().getElementById('replacer_inpaint_diff_mask_view');
112 | if (!mask) return;
113 | let imageContainer = mask.getElementsByClassName('image-container')[0];
114 | if (!imageContainer) return;
115 | const images = imageContainer.getElementsByTagName('img');
116 |
117 | if (images.length == 0) {
118 | imageContainer.style.visibility = 'hidden';
119 | } else {
120 | imageContainer.style.visibility = 'visible';
121 | }
122 | }
123 |
124 | onUiUpdate(replacerRemoveInpaintDiffMaskUpload);
125 |
126 |
127 | function replacerRemoveVideoMaskUpload() {
128 | const maskIds = [...Array(10).keys()].map(i => `replacer_video_mask_${i + 1}`);
129 | maskIds.forEach((maskId) => {
130 | const mask = gradioApp().getElementById(maskId);
131 | if (!mask) return;
132 |
133 | const removeButton = mask.querySelector('button[title="Remove Image"]');
134 | if (removeButton) {
135 | removeButton.style.display = "none";
136 | }
137 |
138 | const imageContainer = mask.getElementsByClassName('image-container')[0];
139 | if (!imageContainer) return;
140 | const images = imageContainer.getElementsByTagName('canvas');
141 | if (images.length == 0) {
142 | imageContainer.style.visibility = 'hidden';
143 | } else {
144 | imageContainer.style.visibility = 'visible';
145 | }
146 | });
147 | }
148 |
149 | onUiUpdate(replacerRemoveVideoMaskUpload);
150 |
151 |
152 | onUiLoaded(function () {
153 | let replacer_generate = gradioApp().getElementById('replacer_generate');
154 | let replacer_hf_generate = gradioApp().getElementById('replacer_hf_generate');
155 | let replacer_video_masks_detect_generate = gradioApp().getElementById('replacer_video_masks_detect_generate');
156 | let replacer_video_gen_generate = gradioApp().getElementById('replacer_video_gen_generate');
157 | replacer_generate.title = '';
158 | replacer_hf_generate.title = '';
159 | replacer_video_masks_detect_generate.title = '';
160 | replacer_video_gen_generate.title = '';
161 | });
162 |
163 |
164 |
165 |
166 | function sendBackToReplacer() {
167 | let res = Array.from(arguments);
168 |
169 | res[1] = selected_gallery_index();
170 |
171 | return res;
172 | }
173 |
174 |
175 | function replacer_go_to_comp_tab() {
176 | const tabs = document.querySelector('#tabs').querySelector('.tab-nav').querySelectorAll('button');
177 | for (let i = 0; i < tabs.length; i++) {
178 | if (tabs[i].textContent.trim() === "Comparison") {
179 | tabs[i].click();
180 | break;
181 | }
182 | }
183 | }
184 |
185 |
186 | function replacer_imageComparisonloadImage() {
187 | let source_a = document.getElementById('replacer_image').querySelector('img');
188 | let source_b = document.getElementById('replacer_gallery').querySelector('img');
189 |
190 | if (source_a == null || source_b == null) return;
191 |
192 | ImageComparator.img_A.src = source_a.src;
193 | ImageComparator.img_B.src = source_b.src;
194 | ImageComparator.reset();
195 | }
196 |
197 |
198 | function replacer_imageComparisonAddButton() { // https://github.com/Haoming02/sd-webui-image-comparison
199 | // 0: Off ; 1: Text ; 2: Icon
200 | let option = 0;
201 | const is_dedicated = gradioApp().getElementById('replacer_image_comparison');
202 | if (is_dedicated) {
203 | option = 2;
204 | const inputs = gradioApp().getElementById('tab_sd-webui-image-comparison')?.getElementsByTagName('input');
205 | for (let i = 0; i < inputs.length; i++) {
206 | inputs[i].disabled = false;
207 | }
208 | } else {
209 | const config = gradioApp().getElementById('setting_comp_send_btn')?.querySelectorAll('label');
210 | if (!config) return;
211 | for (let i = 1; i < 3; i++) {
212 | if (config[i].classList.contains('selected')) {
213 | option = i;
214 | break;
215 | }
216 | }
217 | }
218 |
219 | if (option === 0) return;
220 |
221 | const row = gradioApp().getElementById("image_buttons_replacer").querySelector('.form');
222 | const btn = row.lastElementChild.cloneNode();
223 |
224 | btn.id = "replacer_send_to_comp";
225 | btn.title = "Send images to comparison tab.";
226 | if (btn.classList.contains("hidden")) {
227 | btn.classList.remove("hidden");
228 | }
229 | if (option === 1) {
230 | btn.textContent = "Send to Comparison";
231 | } else {
232 | btn.textContent = "🆚";
233 | }
234 |
235 | btn.addEventListener('click', () => {
236 | replacer_imageComparisonloadImage();
237 | replacer_go_to_comp_tab();
238 | });
239 | row.appendChild(btn);
240 |
241 |
242 | if (is_dedicated && gradioApp().getElementById('extras')) {
243 | const rowExtras = gradioApp().getElementById("image_buttons_extras").querySelector('.form');
244 | const btnExtras = rowExtras.lastElementChild.cloneNode();
245 | btnExtras.id = "replacer_send_to_comp";
246 | btnExtras.title = "Send images to comparison tab.";
247 | if (btnExtras.classList.contains("hidden")) {
248 | btnExtras.classList.remove("hidden");
249 | }
250 | if (option === 1) {
251 | btnExtras.textContent = "Send to Comparison";
252 | } else {
253 | btnExtras.textContent = "🆚";
254 | }
255 |
256 | btnExtras.addEventListener('click', () => {
257 | gradioApp().getElementById('img_comp_extras').click();
258 | replacer_go_to_comp_tab();
259 | });
260 | rowExtras.appendChild(btnExtras);
261 | }
262 |
263 | }
264 |
265 | onUiLoaded(replacer_imageComparisonAddButton);
266 |
267 |
268 | function closeAllVideoMasks() {
269 | const videoMasks = document.querySelectorAll('.replacer_video_mask');
270 | videoMasks.forEach((mask, index) => {
271 | const removeButton = mask.querySelector('button[title="Remove Image"]');
272 | if (removeButton) {
273 | removeButton.click();
274 | const canvases = mask.querySelectorAll('canvas');
275 | canvases.forEach((canvas) => {
276 | const ctx = canvas.getContext('2d');
277 | if (ctx) {
278 | ctx.clearRect(0, 0, canvas.width, canvas.height);
279 | }
280 | canvas.width = 0;
281 | canvas.height = 0;
282 | });
283 | const images = mask.querySelectorAll('img');
284 | images.forEach((img) => {
285 | img.src = '';
286 | img.onload = null;
287 | img.onerror = null;
288 | });
289 | }
290 | });
291 | return [...arguments];
292 | }
293 |
294 |
295 | onUiLoaded(function () {
296 | document.getElementById('replacer_video_open_folder').parentElement.style.display = 'none';
297 | });
298 |
299 |
300 |
--------------------------------------------------------------------------------
/metadata.ini:
--------------------------------------------------------------------------------
1 | # This section contains information about the extension itself.
2 | # This section is optional.
3 | [Extension]
4 |
5 | # A canonical name of the extension.
6 | # Only lowercase letters, numbers, dashes and underscores are allowed.
7 | # This is a unique identifier of the extension, and the loader will refuse to
8 | # load two extensions with the same name. If the name is not supplied, the
9 | # name of the extension directory is used. Other extensions can use this
10 | # name to refer to this extension in the file.
11 | Name = sd-webui-replacer
12 |
13 | # A comma-or-space-separated list of extensions that this extension requires
14 | # to be installed and enabled.
15 | # The loader will generate a warning if any of the extensions in this list is
16 | # not installed or disabled.
17 | Requires = sd-webui-segment-anything
18 |
19 | ; # Declaring relationships of folders
20 | ; #
21 | ; # This section declares relations of all files in `scripts` directory.
22 | ; # By changing the section name, it can also be used on other directories
23 | ; # walked by `load_scripts` function (for example `javascript` and `localization`).
24 | ; # This section is optional.
25 | ; [scripts]
26 |
27 | ; # A comma-or-space-separated list of extensions that files in this folder requires
28 | ; # to be present.
29 | ; # It is only allowed to specify an extension here.
30 | ; # The loader will generate a warning if any of the extensions in this list is
31 | ; # not installed or disabled.
32 | ; Requires = another-extension, yet-another-extension
33 |
34 | ; # A comma-or-space-separated list of extensions that files in this folder wants
35 | ; # to be loaded before.
36 | ; # It is only allowed to specify an extension here.
37 | ; # The loading order of all files in the specified folder will be moved so that
38 | ; # the files in the current extension are loaded before the files in the same
39 | ; # folder in the listed extension.
40 | ; Before = another-extension, yet-another-extension
41 |
42 | ; # A comma-or-space-separated list of extensions that files in this folder wants
43 | ; # to be loaded after.
44 | ; # Other details are the same as `Before` key.
45 | ; After = another-extension, yet-another-extension
46 |
47 | ; # Declaring relationships of a specific file
48 | ; #
49 | ; # This section declares relations of a specific file to files in the same
50 | ; # folder of other extensions.
51 | ; # By changing the section name, it can also be used on other directories
52 | ; # walked by `load_scripts` function (for example `javascript` and `localization`).
53 | ; # This section is optional.
54 | ; [scripts/another-script.py]
55 |
56 | ; # A comma-or-space-separated list of extensions/files that this file requires
57 | ; # to be present.
58 | ; # The `Requires` key in the folder section will be prepended to this list.
59 | ; # The loader will generate a warning if any of the extensions/files in this list is
60 | ; # not installed or disabled.
61 | ; # It is allowed to specify either an extension or a specific file.
62 | ; # When referencing a file, the folder name must be omitted.
63 | ; #
64 | ; # For example, the `yet-another-extension/another-script.py` item refers to
65 | ; # `scripts/another-script.py` in `yet-another-extension`.
66 | ; Requires = another-extension, yet-another-extension/another-script.py, xyz_grid.py
67 |
68 | ; # A comma-or-space-separated list of extensions that this file wants
69 | ; # to be loaded before.
70 | ; # The `Before` key in the folder section will be prepended to this list.
71 | ; # The loading order of this file will be moved so that this file is
72 | ; # loaded before the referenced file in the list.
73 | ; Before = another-extension, yet-another-extension/another-script.py, xyz_grid.py
74 |
75 | ; # A comma-or-space-separated list of extensions that this file wants
76 | ; # to be loaded after.
77 | ; # Other details are the same as `Before` key.
78 | ; After = another-extension, yet-another-extension/another-script.py, xyz_grid.py
79 |
--------------------------------------------------------------------------------
/replacer/extensions/animatediff.py:
--------------------------------------------------------------------------------
1 | import copy, os
2 | from PIL import Image
3 | from modules import scripts, shared, paths_internal, errors, extensions
4 | from replacer.generation_args import AnimateDiffArgs, GenerationArgs
5 | from replacer.tools import applyMask
6 |
7 |
8 | # --- AnimateDiff ---- https://github.com/continue-revolution/sd-webui-animatediff
9 |
10 | SCRIPT : scripts.Script = None
11 | AnimateDiffProcess = None
12 |
13 |
14 | def initAnimateDiffScript():
15 |     global SCRIPT, AnimateDiffProcess
16 |     index = None
17 |     for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
18 |         if script.title().lower() == "animatediff":
19 |             index = idx
20 |             break
21 |     if index is not None:
22 |         SCRIPT = copy.copy(scripts.scripts_img2img.alwayson_scripts[index])
23 |     else:
24 |         return
25 |
26 |     if not AnimateDiffProcess:
27 |         from scripts.animatediff_ui import AnimateDiffProcess
28 |
29 |
30 |
31 |
32 | def apply(p, animatediff_args: AnimateDiffArgs):
33 |     global SCRIPT, AnimateDiffProcess
34 |     if p.script_args[SCRIPT.args_from] is None:
35 |         p.script_args[SCRIPT.args_from] = AnimateDiffProcess()
36 |
37 |     if not animatediff_args or not animatediff_args.needApplyAnimateDiff:
38 |         p.script_args[SCRIPT.args_from].enable = False
39 |     else:
40 |         params = p.script_args[SCRIPT.args_from]
41 |         params.enable = True
42 |         params.format = ["PNG"]
43 |         params.closed_loop = 'N'
44 |
45 |         params.video_length = animatediff_args.fragment_length
46 |         params.fps = animatediff_args.internal_fps
47 |         params.batch_size = animatediff_args.batch_size
48 |         params.stride = animatediff_args.stride
49 |         params.overlap = animatediff_args.overlap
50 |         params.latent_power = animatediff_args.latent_power
51 |         params.latent_scale = animatediff_args.latent_scale
52 |         params.freeinit_enable = animatediff_args.freeinit_enable
53 |         params.freeinit_filter = animatediff_args.freeinit_filter
54 |         params.freeinit_ds = animatediff_args.freeinit_ds
55 |         params.freeinit_dt = animatediff_args.freeinit_dt
56 |         params.freeinit_iters = animatediff_args.freeinit_iters
57 |         params.model = animatediff_args.motion_model
58 |
59 |         params.video_path = animatediff_args.video_path
60 |         params.mask_path = animatediff_args.mask_path
61 |
62 |         p.script_args[SCRIPT.args_from] = params
63 |
64 |
65 | def restoreAfterCN_animatediff(gArgs: GenerationArgs, processed):
66 |     def readImages(input_dir):
67 |         image_list = shared.listfiles(input_dir)
68 |         for filename in image_list:
69 |             image = Image.open(filename).convert('RGBA')
70 |             yield image
71 |
72 |     newImages = []
73 |     i = 0
74 |     total = len(processed.all_seeds)
75 |
76 |     for res, orig, mask in \
77 |             zip(processed.images[:total],
78 |                 readImages(gArgs.animatediff_args.video_path),
79 |                 readImages(gArgs.animatediff_args.mask_path)
80 |             ):
81 |         if gArgs.upscalerForImg2Img not in [None, "None", "Lanczos", "Nearest"]:
82 |             print(f"{i+1} / {total}")
83 |         orig = applyMask(res, orig, mask, gArgs)
84 |         newImages.append(orig)
85 |         i += 1
86 |
87 |     processed.images = newImages
88 |
89 |
90 |
91 | def getModels() -> list:
92 |     if SCRIPT is None:
93 |         return ["None"]
94 |     models = []
95 |     try:
96 |         try:
97 |             adExtension = next(x for x in extensions.extensions if "animatediff" in x.name.lower())
98 |             default_model_dir = os.path.join(adExtension.path, "model")
99 |         except Exception as e:
100 |             errors.report(e)
101 |             default_model_dir = os.path.join(paths_internal.extensions_dir, "sd-webui-animatediff", "model")
102 |         model_dir = shared.opts.data.get("animatediff_model_path", default_model_dir)
103 |         if not model_dir:
104 |             model_dir = default_model_dir
105 |         models = shared.listfiles(model_dir)
106 |         models = [os.path.basename(x) for x in models]
107 |     except Exception as e:
108 |         errors.report(e)
109 |
110 |     if models == []:
111 |         return ["None"]
112 |     return models
113 |
114 |
--------------------------------------------------------------------------------
/replacer/extensions/arplusplus.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from modules import scripts
3 |
4 |
5 |
6 | # --- ArPlusPlus ---- https://github.com/altoiddealer/--sd-webui-ar-plusplus (maybe works with other forks)
7 |
8 | SCRIPT : scripts.Script = None
9 |
10 | def initArPlusPlusScript():
11 | global SCRIPT
12 | script_idx = None
13 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
14 | if script.title() == "Aspect Ratio picker":
15 | script_idx = idx
16 | break
17 | if script_idx is not None:
18 | SCRIPT = copy.copy(scripts.scripts_img2img.alwayson_scripts[script_idx])
19 | else:
20 | return
21 |
22 |
23 | def reinitArPlusPlusScript():
24 | global SCRIPT
25 | script_idx = None
26 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
27 | if script.title() == "Aspect Ratio picker":
28 | script_idx = idx
29 | break
30 |     if script_idx is None:
31 |         return
32 |     SCRIPT = copy.copy(scripts.scripts_img2img.alwayson_scripts[script_idx])
33 |     SCRIPT.args_from = scripts.scripts_img2img.alwayson_scripts[script_idx].args_from
34 |     SCRIPT.args_to = scripts.scripts_img2img.alwayson_scripts[script_idx].args_to
35 |     SCRIPT.name = scripts.scripts_img2img.alwayson_scripts[script_idx].name
36 |
37 |
--------------------------------------------------------------------------------
/replacer/extensions/background_extensions.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from modules import scripts
3 |
4 |
5 | # --- Extensions which don't have arguments ---
6 |
7 |
8 | SCRIPTS = None
9 |
10 | def initAllBackgroundExtensions():
11 | global SCRIPTS
12 | SCRIPTS = []
13 | for script in scripts.scripts_img2img.alwayson_scripts:
14 | if script.args_from == script.args_to and script.args_from is not None:
15 | SCRIPTS.append(copy.copy(script))
16 |
17 |
18 |
19 | # --- LamaCleaner as masked content ---- https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content
20 |
21 | _lamaCleanerAvailable = None
22 |
23 | def lamaCleanerAvailable():
24 | global _lamaCleanerAvailable
25 | if _lamaCleanerAvailable is None:
26 | _lamaCleanerAvailable = "Lama-cleaner-masked-content" in (x.title() for x in scripts.scripts_img2img.alwayson_scripts)
27 | return _lamaCleanerAvailable
28 |
29 |
--------------------------------------------------------------------------------
/replacer/extensions/controlnet.py:
--------------------------------------------------------------------------------
1 | import copy
2 | import numpy as np
3 | import gradio as gr
4 | from PIL import ImageChops
5 | from modules import scripts, errors
6 | from replacer.tools import limitImageByOneDimension, applyMaskBlur, applyMask, applyRotationFix
7 | from replacer.generation_args import GenerationArgs
8 | from replacer.extensions.animatediff import restoreAfterCN_animatediff
9 |
10 |
11 |
12 | # --- ControlNet ---- https://github.com/Mikubill/sd-webui-controlnet
13 |
14 | try:
15 | from lib_controlnet import external_code
16 | IS_SD_WEBUI_FORGE = True
17 | except:
18 | external_code = None
19 | IS_SD_WEBUI_FORGE = False
20 |
21 | SCRIPT : scripts.Script = None
22 | ControlNetUiGroup = None
23 |
24 |
25 | def initCNScript():
26 | global SCRIPT, ControlNetUiGroup, external_code
27 | cnet_idx = None
28 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
29 | if script.title().lower() == "controlnet":
30 | cnet_idx = idx
31 | break
32 | if cnet_idx is not None:
33 | SCRIPT = copy.copy(scripts.scripts_img2img.alwayson_scripts[cnet_idx])
34 | else:
35 | return
36 |
37 | try:
38 | if not IS_SD_WEBUI_FORGE:
39 | from scripts.controlnet_ui.controlnet_ui_group import ControlNetUiGroup
40 | from scripts import external_code
41 | else:
42 | from lib_controlnet.controlnet_ui.controlnet_ui_group import ControlNetUiGroup
43 | except:
44 | errors.report('Cannot register ControlNetUiGroup', exc_info=True)
45 | SCRIPT = None
46 | initCNContext()
47 |
48 |
49 | def reinitCNScript():
50 | global SCRIPT
51 | if SCRIPT is None:
52 | return
53 | cnet_idx = None
54 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
55 | if script.title().lower() == "controlnet":
56 | cnet_idx = idx
57 | break
58 | if cnet_idx is not None:
59 | SCRIPT.args_from = scripts.scripts_img2img.alwayson_scripts[cnet_idx].args_from
60 | SCRIPT.args_to = scripts.scripts_img2img.alwayson_scripts[cnet_idx].args_to
61 | SCRIPT.name = scripts.scripts_img2img.alwayson_scripts[cnet_idx].name
62 |
63 |
64 | oldCNContext = None
65 | def initCNContext():
66 | global ControlNetUiGroup, oldCNContext
67 | oldCNContext = copy.copy(ControlNetUiGroup.a1111_context)
68 | ControlNetUiGroup.a1111_context.img2img_submit_button = gr.Button(visible=False)
69 |
70 | def restoreCNContext():
71 | global ControlNetUiGroup, oldCNContext
72 | if not ControlNetUiGroup:
73 | return
74 | ControlNetUiGroup.a1111_context = copy.copy(oldCNContext)
75 | ControlNetUiGroup.all_ui_groups = []
76 |
77 | g_cn_HWC3 = None
78 | def convertIntoCNImageFormat(image):
79 | global g_cn_HWC3
80 | if g_cn_HWC3 is None:
81 | from annotator.util import HWC3
82 | g_cn_HWC3 = HWC3
83 |
84 | color = g_cn_HWC3(np.asarray(image).astype(np.uint8))
85 | return color
86 |
87 |
88 | def restoreAfterCN(origImage, mask, gArgs: GenerationArgs, processed):
89 | print('Restoring images resolution after ControlNet Inpainting')
90 |
91 | if gArgs.animatediff_args and gArgs.animatediff_args.needApplyAnimateDiff:
92 | restoreAfterCN_animatediff(gArgs, processed)
93 | else:
94 | origMask = mask.convert('RGBA')
95 | origMask = applyMaskBlur(origMask, gArgs.mask_blur)
96 | upscaler = gArgs.upscalerForImg2Img
97 | if upscaler == "":
98 | upscaler = None
99 |
100 | for i in range(len(processed.all_seeds)):
101 | image = applyMask(processed.images[i], origImage, origMask, gArgs)
102 | processed.images[i] = image
103 |
104 |
105 | class UnitIsReserved(Exception):
106 | def __init__(self, unitNum: int):
107 | super().__init__(
108 | f"You have enabled ControlNet Unit {unitNum}, while it's reserved for "
109 | "AnimateDiff video inpainting. Please disable it. If you need more units, "
110 | "increase maximal number of them in Settings -> ControlNet")
111 |
112 |
113 | def enableInpaintModeForCN(gArgs: GenerationArgs, p, previousFrame):
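    # Rewrites the ControlNet units for this pass (A1111 only; Forge returns early):
    # - optionally feeds the previous frame into the units listed in previous_frame_into_controlnet,
    # - disables SparseCtrl units outside of AnimateDiff runs,
    # - reserves the last unit for AnimateDiff inpainting (inpaint_only, or inpaint_only+lama when
    #   inpainting_fill > 3),
    # - and, for enabled inpaint_only units, hands the image and mask to ControlNet instead of the
    #   built-in img2img inpaint, setting p.needRestoreAfterCN so restoreAfterCN() runs afterwards.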
114 | if IS_SD_WEBUI_FORGE: return
115 | global external_code
116 | gArgs.cn_args = list(gArgs.cn_args)
117 | hasInpainting = False
118 | mask = None
119 | needApplyAnimateDiff = False
120 | if gArgs.animatediff_args:
121 | needApplyAnimateDiff = gArgs.animatediff_args.needApplyAnimateDiff
122 |
123 | for i in range(len(gArgs.cn_args)):
124 | gArgs.cn_args[i] = copy.copy(external_code.to_processing_unit(gArgs.cn_args[i]))
125 |
126 | if gArgs.previous_frame_into_controlnet and f"Unit {i}" in gArgs.previous_frame_into_controlnet:
127 | if previousFrame:
128 | print(f'Passing the previous frame into CN unit {i}')
129 | previousFrame = applyRotationFix(previousFrame, gArgs.rotation_fix)
130 | gArgs.cn_args[i].image = {
131 | "image": convertIntoCNImageFormat(previousFrame),
132 | }
133 | gArgs.cn_args[i].enabled = True
134 | else:
135 | print(f'Disabling CN unit {i} for the first frame')
136 | gArgs.cn_args[i].enabled = False
137 | continue
138 |
139 | if not needApplyAnimateDiff and \
140 | 'sparsectrl' in gArgs.cn_args[i].model.lower() and \
141 | gArgs.cn_args[i].enabled:
142 | print(f'Sparsectrl was disabled in unit {i} because of non-animatediff generation')
143 | gArgs.cn_args[i].enabled = False
144 | continue
145 |
146 | if gArgs.animatediff_args and gArgs.animatediff_args.needApplyCNForAnimateDiff and i+1 == len(gArgs.cn_args):
147 | if gArgs.cn_args[i].enabled and gArgs.cn_args[i].module != 'inpaint_only':
148 | raise UnitIsReserved(i)
149 | gArgs.cn_args[i].enabled = True
150 | gArgs.cn_args[i].module = 'inpaint_only'
151 | if gArgs.inpainting_fill > 3: # lama cleaner
152 | gArgs.cn_args[i].module += "+lama"
153 | gArgs.cn_args[i].model = gArgs.animatediff_args.cn_inpainting_model
154 | gArgs.cn_args[i].weight = gArgs.animatediff_args.control_weight
155 |
156 | if not gArgs.cn_args[i].enabled:
157 | continue
158 |
159 | if not IS_SD_WEBUI_FORGE and gArgs.cn_args[i].module.startswith('inpaint_only'):
160 | hasInpainting = True
161 | if not needApplyAnimateDiff and gArgs.originalH and gArgs.originalW:
162 | p.height = gArgs.originalH
163 | p.width = gArgs.originalW
164 | if p.image_mask is not None:
165 | mask = p.image_mask
166 | if p.inpainting_mask_invert:
167 | mask = ImageChops.invert(mask)
168 | mask = applyMaskBlur(mask, p.mask_blur)
169 |
170 | print('Use cn inpaint instead of sd inpaint')
171 | image = limitImageByOneDimension(p.init_images[0], max(p.width, p.height))
172 | if not needApplyAnimateDiff:
173 | gArgs.cn_args[i].image = {
174 | "image": convertIntoCNImageFormat(image),
175 | "mask": convertIntoCNImageFormat(mask.resize(image.size)),
176 | }
177 | else:
178 | from scripts.enums import InputMode
179 | gArgs.cn_args[i].input_mode = InputMode.BATCH
180 | gArgs.cn_args[i].batch_modifiers = []
181 | p.image_mask = None
182 | p.inpaint_full_res = False
183 | p.needRestoreAfterCN = True
184 |
185 |
186 | if hasInpainting:
187 | for i in range(len(gArgs.cn_args)):
188 | gArgs.cn_args[i].inpaint_crop_input_image = False
189 | gArgs.cn_args[i].resize_mode = external_code.ResizeMode.OUTER_FIT
190 |
191 |
192 |
193 | needWatchControlNetUI = False
194 | controlNetAccordion = None
195 |
196 | def watchControlNetUI(component, **kwargs):
197 | global needWatchControlNetUI, controlNetAccordion
198 | if not needWatchControlNetUI:
199 | return
200 |
201 | elem_id = kwargs.get('elem_id', None)
202 | if elem_id is None:
203 | return
204 |
205 | if elem_id == 'controlnet':
206 | controlNetAccordion = component
207 | return
208 |
209 | if 'img2img' in elem_id:
210 | component.elem_id = elem_id.replace('img2img', 'replacer')
211 |
212 |
213 | def getInpaintModels() -> list:
214 | global external_code
215 | if external_code is None:
216 | return ["None"]
217 |
218 | result = []
219 | try:
220 | models = external_code.get_models()
221 | for model in models:
222 | if "inpaint" in model.lower():
223 | result.append(model)
224 | except Exception as e:
225 | errors.report(f"{e} ***", exc_info=True)
226 | if result == []:
227 | return ["None"]
228 | return result
229 |
230 |
--------------------------------------------------------------------------------
/replacer/extensions/image_comparison.py:
--------------------------------------------------------------------------------
1 | import importlib
2 | import gradio as gr
3 | from replacer.options import EXT_NAME, extrasInDedicated
4 |
5 |
6 |
7 | # --- ImageComparison ---- https://github.com/Haoming02/sd-webui-image-comparison
8 |
9 |
10 | def addButtonIntoComparisonTab(component, **kwargs):
11 | elem_id = kwargs.get('elem_id', None)
12 | if elem_id == 'img_comp_extras':
13 | column = component.parent
14 | with column.parent:
15 | with column:
16 | replacer_btn = gr.Button(f'Compare {EXT_NAME}', elem_id='img_comp_replacer')
17 | replacer_btn.click(None, None, None, _js='replacer_imageComparisonloadImage')
18 |
19 |
20 | needWatchImageComparison = False
21 |
22 | def watchImageComparison(component, **kwargs):
23 | global needWatchImageComparison
24 | if not needWatchImageComparison:
25 | return
26 | elem_id = kwargs.get('elem_id', None)
27 | hideButtons = ['img_comp_i2i', 'img_comp_inpaint']
28 | if not extrasInDedicated():
29 | hideButtons.append('img_comp_extras')
30 | if elem_id in hideButtons:
31 | component.visible = False
32 |
33 |
34 | ImageComparisonTab = None
35 |
36 | def preloadImageComparisonTab():
37 | global ImageComparisonTab, needWatchImageComparison
38 | try:
39 | img_comp = importlib.import_module('extensions.sd-webui-image-comparison.scripts.img_comp')
40 | except ImportError:
41 | return
42 | needWatchImageComparison = True
43 | ImageComparisonTab = img_comp.img_ui()[0]
44 |
45 | def mountImageComparisonTab():
46 | global ImageComparisonTab, needWatchImageComparison
47 | if not ImageComparisonTab:
48 | return
49 | gr.Radio(value="Off", elem_id="setting_comp_send_btn", choices=["Off", "Text", "Icon"], visible=False)
50 | gr.Textbox(elem_id="replacer_image_comparison", visible=False)
51 | interface, label, ifid = ImageComparisonTab
52 | with gr.Tab(label=label, elem_id=f"tab_{ifid}"):
53 | interface.render()
54 | needWatchImageComparison = False
55 |
56 |
--------------------------------------------------------------------------------
/replacer/extensions/inpaint_difference.py:
--------------------------------------------------------------------------------
1 | from modules import errors
2 |
3 |
4 | # --- InpaintDifference ---- https://github.com/John-WL/sd-webui-inpaint-difference
5 |
6 | Globals = None
7 | computeInpaintDifference = None
8 |
9 | def initInpaintDifference():
10 | global Globals, computeInpaintDifference
11 | try:
12 | from lib_inpaint_difference.globals import DifferenceGlobals as Globals
13 | except:
14 | Globals = None
15 | return
16 |
17 | try:
18 | from lib_inpaint_difference.mask_processing import compute_mask
19 | def computeInpaintDifference(
20 | non_altered_image_for_inpaint_diff,
21 | image,
22 | mask_blur,
23 | mask_expand,
24 | erosion_amount,
25 | inpaint_diff_threshold,
26 | inpaint_diff_contours_only,
27 | ):
28 | if image is None or non_altered_image_for_inpaint_diff is None:
29 | return None
30 | return compute_mask(
31 | non_altered_image_for_inpaint_diff.convert('RGB'),
32 | image.convert('RGB'),
33 | mask_blur,
34 | mask_expand,
35 | erosion_amount,
36 | inpaint_diff_threshold,
37 | inpaint_diff_contours_only,
38 | )
39 |
40 | except Exception as e:
41 | errors.report(f"Cannot init InpaintDifference {e}", exc_info=True)
42 | Globals = None
43 |
44 |
--------------------------------------------------------------------------------
/replacer/extensions/replacer_extensions.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from modules import scripts
3 | from replacer.generation_args import GenerationArgs
4 |
5 | from replacer.extensions import controlnet
6 | from replacer.extensions import inpaint_difference
7 | from replacer.extensions import soft_inpainting
8 | from replacer.extensions import background_extensions
9 | from replacer.extensions import image_comparison
10 | from replacer.extensions import animatediff
11 | from replacer.extensions import arplusplus
12 |
13 |
14 |
15 | def initAllScripts():
16 | scripts.scripts_img2img.initialize_scripts(is_img2img=True)
17 | controlnet.initCNScript()
18 | inpaint_difference.initInpaintDifference()
19 | soft_inpainting.initSoftInpaintScript()
20 | background_extensions.initAllBackgroundExtensions()
21 | animatediff.initAnimateDiffScript()
22 | arplusplus.initArPlusPlusScript()
23 |
24 | def restoreTemporaryChangedThings():
25 | controlnet.restoreCNContext()
26 |
27 | def reinitAllScriptsAfterUICreated(*args): # for args_to and args_from
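    # webui fills in args_from/args_to (and name) only while the img2img UI is being built,
    # so the script copies taken in initAllScripts have to pick up the final indices here.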
28 | controlnet.reinitCNScript()
29 | soft_inpainting.reinitSoftInpaintScript()
30 | background_extensions.initAllBackgroundExtensions()
31 | animatediff.initAnimateDiffScript()
32 | arplusplus.reinitArPlusPlusScript()
33 |
34 | def prepareScriptsArgs(scripts_args):
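    # Splits the flat list of script UI values into per-extension chunks in a fixed order
    # (ControlNet, then Soft Inpainting); a missing extension contributes an empty chunk,
    # so the result always has two entries.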
35 | result = []
36 | lastIndex = 0
37 |
38 | if controlnet.SCRIPT:
39 | argsLen = controlnet.SCRIPT.args_to - controlnet.SCRIPT.args_from
40 | result.append(scripts_args[lastIndex:lastIndex+argsLen])
41 | lastIndex += argsLen
42 | else:
43 | result.append([])
44 |
45 | if soft_inpainting.SCRIPT:
46 | argsLen = soft_inpainting.SCRIPT.args_to - soft_inpainting.SCRIPT.args_from
47 | result.append(scripts_args[lastIndex:lastIndex+argsLen])
48 | lastIndex += argsLen
49 | else:
50 | result.append([])
51 |
52 | return result
53 |
54 |
55 | def applyScripts(p, gArgs: GenerationArgs):
56 | needControlNet = controlnet.SCRIPT is not None and gArgs.cn_args is not None and len(gArgs.cn_args) != 0
57 | needSoftInpaint = soft_inpainting.SCRIPT is not None and gArgs.soft_inpaint_args is not None and len(gArgs.soft_inpaint_args) != 0
58 |
59 | availableScripts = []
60 | if needControlNet:
61 | availableScripts.append(controlnet.SCRIPT)
62 | if needSoftInpaint :
63 | availableScripts.append(soft_inpainting.SCRIPT)
64 | if animatediff.SCRIPT is not None:
65 | availableScripts.append(animatediff.SCRIPT)
66 |
67 | if len(availableScripts) == 0:
68 | return
69 |
70 | allArgsLen = max(x.args_to for x in availableScripts)
71 |
72 | p.scripts = copy.copy(scripts.scripts_img2img)
73 | p.scripts.alwayson_scripts = availableScripts
74 | p.scripts.alwayson_scripts.extend(background_extensions.SCRIPTS)
75 | p.script_args = [None] * allArgsLen
76 |
77 | if needControlNet:
78 | for i in range(len(gArgs.cn_args)):
79 | p.script_args[controlnet.SCRIPT.args_from + i] = gArgs.cn_args[i]
80 |
81 | if needSoftInpaint:
82 | for i in range(len(gArgs.soft_inpaint_args)):
83 | p.script_args[soft_inpainting.SCRIPT.args_from + i] = gArgs.soft_inpaint_args[i]
84 |
85 | if animatediff.SCRIPT is not None:
86 | animatediff.apply(p, gArgs.animatediff_args)
87 |
88 |
89 |
90 | def prepareScriptsArgs_api(scriptsApi : dict):
91 | cn_args = []
92 | soft_inpaint_args = []
93 |
94 | for scriptApi in scriptsApi.items():
95 | if scriptApi[0] == controlnet.SCRIPT.name:
96 | cn_args = scriptApi[1]["args"]
97 | continue
98 | if scriptApi[0] == soft_inpainting.SCRIPT.name:
99 | soft_inpaint_args = scriptApi[1]["args"]
100 | continue
101 | return [cn_args, soft_inpaint_args]
102 |
103 |
104 | def getAvailableScripts_api():
105 | result = []
106 | if controlnet.SCRIPT:
107 | result.append(controlnet.SCRIPT.name)
108 | if soft_inpainting.SCRIPT:
109 | result.append(soft_inpainting.SCRIPT.name)
110 | return result
111 |
112 |
--------------------------------------------------------------------------------
/replacer/extensions/soft_inpainting.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from modules import scripts
3 |
4 |
5 | # --- SoftInpainting ----
6 |
7 | SCRIPT : scripts.Script = None
8 |
9 |
10 | def initSoftInpaintScript():
11 | global SCRIPT
12 | index = None
13 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
14 | if script.title() == "Soft Inpainting":
15 | index = idx
16 | break
17 | if index is not None:
18 | SCRIPT = copy.copy(scripts.scripts_img2img.alwayson_scripts[index])
19 |
20 |
21 | def reinitSoftInpaintScript():
22 | global SCRIPT
23 | if SCRIPT is None:
24 | return
25 | index = None
26 | for idx, script in enumerate(scripts.scripts_img2img.alwayson_scripts):
27 | if script.title() == "Soft Inpainting":
28 | index = idx
29 | break
30 | if index is not None:
31 | SCRIPT.args_from = scripts.scripts_img2img.alwayson_scripts[index].args_from
32 | SCRIPT.args_to = scripts.scripts_img2img.alwayson_scripts[index].args_to
33 | SCRIPT.name = scripts.scripts_img2img.alwayson_scripts[index].name
34 |
35 |
36 |
37 |
38 | needWatchSoftInpaintUI = False
39 |
40 |
41 | def watchSoftInpaintUI(component, **kwargs):
42 | global needWatchSoftInpaintUI
43 | if not needWatchSoftInpaintUI:
44 | return
45 |
46 | elem_id = kwargs.get('elem_id', None)
47 | if elem_id is None:
48 | return
49 |
50 | if 'soft' in elem_id:
51 | component.elem_id = elem_id.replace('soft', 'replacer_soft')
52 |
53 |
--------------------------------------------------------------------------------
/replacer/generate.py:
--------------------------------------------------------------------------------
1 | from PIL import Image
2 | import modules.shared as shared
3 | from modules.shared import opts
4 | from modules.images import save_image
5 | from modules import sd_models, errors
6 | from replacer.mask_creator import MaskResult, NothingDetectedError, createMask
7 | from replacer.generation_args import GenerationArgs, AppropriateData
8 | from replacer.options import EXT_NAME, needAutoUnloadModels
9 | from replacer.tools import clearCache, interrupted, Pause
10 | from replacer.inpaint import inpaint
11 | from replacer.hires_fix import getGenerationArgsForHiresFixPass, prepareGenerationArgsBeforeHiresFixPass
12 |
13 |
14 | class InterruptedDetection(Exception):
15 | def __init__(self):
16 | super().__init__("InterruptedDetection")
17 |
18 |
19 |
20 | def generateSingle(
21 | image : Image.Image,
22 | gArgs : GenerationArgs,
23 | savePath : str,
24 | saveSuffix : str,
25 | save_to_dirs : bool,
26 | extra_includes : list,
27 | batch_processed : list,
28 | ):
29 | if interrupted():
30 | raise InterruptedDetection()
31 |
32 | maskResult: MaskResult = createMask(image, gArgs)
33 | gArgs.mask = maskResult.mask
34 |
35 | if needAutoUnloadModels():
36 | clearCache()
37 |
38 | if interrupted():
39 | raise InterruptedDetection()
40 |
41 | if maskResult.maskPreview:
42 | shared.state.assign_current_image(maskResult.maskPreview)
43 | shared.state.textinfo = "inpainting"
44 |
45 | processed, scriptImages = inpaint(image, gArgs, savePath, saveSuffix, save_to_dirs,
46 | batch_processed)
47 |
48 | extraImages = []
49 | if "mask" in extra_includes:
50 | extraImages.append(gArgs.mask)
51 | if "box" in extra_includes and maskResult.maskBox is not None:
52 | extraImages.append(maskResult.maskBox)
53 | if "cut" in extra_includes and maskResult.maskCut is not None:
54 | extraImages.append(maskResult.maskCut)
55 | if "preview" in extra_includes and maskResult.maskPreview is not None:
56 | extraImages.append(maskResult.maskPreview)
57 | if "script" in extra_includes:
58 | extraImages.extend(scriptImages)
59 |
60 | return processed, extraImages
61 |
62 |
63 |
64 | def generate(
65 | gArgs: GenerationArgs,
66 | saveDir: str,
67 | saveToSubdirs: bool,
68 | extra_includes: list,
69 | ):
70 | restoreList = []
71 | try:
72 | Pause.paused = False
73 | shared.total_tqdm.clear()
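        # Progress budget: one job per input image per batch_count; with automatic hires fix,
        # each produced image (job_count * batch_size) adds one extra job plus hires_fix_args.steps steps.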
74 | shared.state.job_count = len(gArgs.images) * gArgs.batch_count
75 | totalSteps = shared.state.job_count * gArgs.totalSteps()
76 | if gArgs.pass_into_hires_fix_automatically:
77 | hiresCount = shared.state.job_count * gArgs.batch_size
78 | totalSteps += hiresCount * gArgs.hires_fix_args.steps
79 | shared.state.job_count += hiresCount
80 | shared.total_tqdm.updateTotal(totalSteps)
81 |
82 | if not gArgs.override_sd_model or gArgs.sd_model_checkpoint is None or gArgs.sd_model_checkpoint == "":
83 | gArgs.sd_model_checkpoint = opts.sd_model_checkpoint
84 | else:
85 | shared.state.textinfo = "switching sd checkpoint"
86 | oldModel = opts.sd_model_checkpoint
87 | def restore():
88 | opts.sd_model_checkpoint = oldModel
89 | restoreList.append(restore)
90 | opts.sd_model_checkpoint = gArgs.sd_model_checkpoint
91 | sd_models.reload_model_weights()
92 |
93 |
94 | if gArgs.pass_into_hires_fix_automatically:
95 | prepareGenerationArgsBeforeHiresFixPass(gArgs)
96 |
97 | n = len(gArgs.images)
98 | processed = None
99 | allExtraImages = []
100 | batch_processed = None
101 |
102 |
103 | for idx, image in enumerate(gArgs.images):
104 | progressInfo = "generating mask"
105 | if n > 1:
106 | print(flush=True)
107 | print()
108 | print(f' [{EXT_NAME}] processing {idx+1}/{n}')
109 | progressInfo += f" {idx+1}/{n}"
110 | Pause.wait()
111 |
112 | shared.state.textinfo = progressInfo
113 | shared.state.skipped = False
114 |
115 | if interrupted():
116 | if needAutoUnloadModels():
117 | clearCache()
118 | break
119 |
120 | try:
121 | saveSuffix = ""
122 | if gArgs.pass_into_hires_fix_automatically:
123 | saveSuffix = "-before-hires-fix"
124 | saveDir_ = saveDir
125 | if gArgs.pass_into_hires_fix_automatically and not gArgs.save_before_hires_fix:
126 | saveDir_ = None
127 | lenImagesBefore = len(batch_processed.images) if batch_processed else 0
128 |
129 | processed, extraImages = generateSingle(image, gArgs, saveDir_, saveSuffix,
130 | saveToSubdirs, extra_includes, batch_processed)
131 |
132 | if saveDir and shared.opts.save_mask:
133 | save_image(gArgs.mask, saveDir, "", processed.all_seeds[0], gArgs.positivePrompt, opts.samples_format,
134 | suffix='-mask', save_to_dirs=saveToSubdirs)
135 |
136 |
137 | if gArgs.pass_into_hires_fix_automatically:
138 | hrGArgs = getGenerationArgsForHiresFixPass(gArgs)
139 | for i in range(lenImagesBefore, len(processed.images)):
140 | shared.state.textinfo = 'applying hires fix'
141 | if interrupted():
142 | break
143 | processed2, _ = inpaint(processed.images[i], hrGArgs, saveDir, "", saveToSubdirs)
144 | processed.images[i] = processed2.images[0]
145 | processed.infotexts[i] = processed2.infotexts[0]
146 |
147 | for i in range(len(processed.images) - lenImagesBefore):
148 | processed.images[lenImagesBefore+i].appropriateInputImageData = AppropriateData(idx, gArgs.mask, gArgs.seed+i)
149 |
150 | except Exception as e:
151 | if type(e) is InterruptedDetection:
152 | break
153 |                 print(f' [{EXT_NAME}] Exception: {e}')
154 |                 if type(e) is not NothingDetectedError:
155 |                     errors.report('***', exc_info=True)
156 |                 if needAutoUnloadModels():
157 |                     clearCache()
158 |                 if n == 1:
159 |                     raise
160 |                 shared.state.nextjob()
161 |                 continue
162 |
163 | allExtraImages += extraImages
164 | batch_processed = processed
165 |
166 | if processed is None:
167 | return None, None
168 |
169 | processed.info = processed.infotexts[0]
170 |
171 | return processed, allExtraImages
172 |
173 | finally:
174 | for restore in restoreList:
175 | restore()
176 |
177 |
178 |
179 |
180 |
181 |
182 |
--------------------------------------------------------------------------------
/replacer/generation_args.py:
--------------------------------------------------------------------------------
1 | import copy, math
2 | from dataclasses import dataclass
3 | from PIL import Image
4 |
5 |
6 | @dataclass
7 | class HiresFixArgs:
8 | upscaler: str
9 | steps: int
10 | sampler: str
11 | scheduler: str
12 | denoise: float
13 | cfg_scale: float
14 | positive_prompt_suffix: str
15 | size_limit: int
16 | above_limit_upscaler: str
17 | unload_detection_models: bool
18 | disable_cn: bool
19 | extra_mask_expand: int
20 | positive_prompt: str
21 | negative_prompt: str
22 | sd_model_checkpoint: str
23 | extra_inpaint_padding: int
24 | extra_mask_blur: int
25 | randomize_seed: bool
26 | soft_inpaint: str
27 | supersampling: float
28 |
29 | DUMMY_HIRESFIX_ARGS = HiresFixArgs("", 0, "", "", 0.0, 0.0, "", 0, "", False, True, 0, "", "", "", 0, 0, False, "Same", 1.0)
30 |
31 | @dataclass
32 | class HiresFixCacheData:
33 | upscaler: str
34 | generatedImage: Image.Image
35 | galleryIdx: int
36 |
37 |
38 | @dataclass
39 | class AppropriateData:
40 | inputImageIdx: int
41 | mask: Image.Image
42 | seed: int
43 |
44 |
45 | @dataclass
46 | class AnimateDiffArgs:
47 | fragment_length: int
48 | internal_fps: float
49 | batch_size: int
50 | stride: int
51 | overlap: int
52 | latent_power: float
53 | latent_scale: float
54 | freeinit_enable: bool
55 | freeinit_filter: str
56 | freeinit_ds: float
57 | freeinit_dt: float
58 | freeinit_iters: int
59 | generate_only_first_fragment: bool
60 |
61 | cn_inpainting_model: str
62 | control_weight: float
63 | force_override_sd_model: bool
64 | force_sd_model_checkpoint: str
65 | motion_model: str
66 |
67 | needApplyAnimateDiff: bool = False
68 | needApplyCNForAnimateDiff: bool = False
69 | video_path: str = None
70 | mask_path: str = None
71 |
72 | DUMMY_ANIMATEDIFF_ARGS = AnimateDiffArgs(0, 0, 0, 0, 0, 0, 0, False, "", 0, 0, 0, False, "", 0, False, "", "")
73 |
74 | @dataclass
75 | class GenerationArgs:
76 | positivePrompt: str
77 | negativePrompt: str
78 | detectionPrompt: str
79 | avoidancePrompt: str
80 | upscalerForImg2Img: str
81 | seed: int
82 | samModel: str
83 | grdinoModel: str
84 | boxThreshold: float
85 | maskExpand: int
86 | maxResolutionOnDetection: int
87 | steps: int
88 | sampler_name: str
89 | scheduler: str
90 | mask_blur: int
91 | inpainting_fill: int
92 | batch_count: int
93 | batch_size: int
94 | cfg_scale: float
95 | denoising_strength: float
96 | height: int
97 | width: int
98 | inpaint_full_res_padding: int
99 | img2img_fix_steps: bool
100 | inpainting_mask_invert : int
101 | images: list[Image.Image]
102 | override_sd_model: bool
103 | sd_model_checkpoint: str
104 | mask_num: int
105 | avoidance_mask: Image.Image
106 | only_custom_mask: bool
107 | custom_mask: Image.Image
108 | use_inpaint_diff: bool
109 | clip_skip: int
110 |
111 | pass_into_hires_fix_automatically: bool
112 | save_before_hires_fix: bool
113 | do_not_use_mask: bool
114 | rotation_fix: str
115 | variation_seed: int
116 | variation_strength: float
117 | integer_only_masked: bool
118 | forbid_too_small_crop_region: bool
119 | correct_aspect_ratio: bool
120 |
121 | hires_fix_args: HiresFixArgs
122 | cn_args: list
123 | soft_inpaint_args: list
124 |
125 | mask: Image.Image = None
126 | mask_num_for_metadata: int = None
127 | hiresFixCacheData: HiresFixCacheData = None
128 | addHiresFixIntoMetadata: bool = False
129 | appropriateInputImageDataList: list[AppropriateData] = None
130 | originalW = None
131 | originalH = None
132 | previous_frame_into_controlnet: list[str] = None
133 | animatediff_args: AnimateDiffArgs = None
134 |
135 | def totalSteps(self):
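        # Estimate of the sampling steps img2img will actually run: roughly ceil((steps-1)*denoise)+1
        # unless "img2img fix steps" is set, in which case all steps run. E.g. steps=20, denoise=0.5 -> 11.
        # FreeInit repeats the whole sampling freeinit_iters times.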
136 | total = min(math.ceil((self.steps-1) * (1 if self.img2img_fix_steps else self.denoising_strength) + 1), self.steps)
137 | if self.animatediff_args and self.animatediff_args.freeinit_enable:
138 | total *= self.animatediff_args.freeinit_iters
139 | return total
140 |
141 | def copy(self):
142 | gArgs = copy.copy(self)
143 | gArgs.cn_args = copy.copy(list(gArgs.cn_args))
144 | for i in range(len(gArgs.cn_args)):
145 | gArgs.cn_args[i] = gArgs.cn_args[i].copy()
146 | gArgs.animatediff_args = copy.copy(gArgs.animatediff_args)
147 | gArgs.soft_inpaint_args = copy.copy(gArgs.soft_inpaint_args)
148 | gArgs.hires_fix_args = copy.copy(gArgs.hires_fix_args)
149 | gArgs.images = copy.copy(gArgs.images)
150 | for i in range(len(gArgs.images)):
151 | gArgs.images[i] = gArgs.images[i].copy()
152 | gArgs.mask = copy.copy(gArgs.mask)
153 | gArgs.hiresFixCacheData = copy.copy(gArgs.hiresFixCacheData)
154 | if gArgs.appropriateInputImageDataList is not None:
155 | gArgs.appropriateInputImageDataList = copy.copy(gArgs.appropriateInputImageDataList)
156 | for i in range(len(gArgs.appropriateInputImageDataList)):
157 | gArgs.appropriateInputImageDataList[i] = gArgs.appropriateInputImageDataList[i].copy()
158 | return gArgs
159 |
--------------------------------------------------------------------------------
/replacer/hires_fix.py:
--------------------------------------------------------------------------------
1 | import copy
2 | from replacer.generation_args import GenerationArgs
3 | from replacer.options import getHiresFixPositivePromptSuffixExamples
4 | from replacer.tools import clearCache, generateSeed, extraMaskExpand, getActualCropRegion
5 |
6 |
7 |
8 | def prepareGenerationArgsBeforeHiresFixPass(gArgs: GenerationArgs) -> None:
9 | hf = gArgs.hires_fix_args
10 | gArgs.upscalerForImg2Img = hf.upscaler
11 | if gArgs.originalW is not None:
12 | gArgs.width = gArgs.originalW
13 | gArgs.height = gArgs.originalH
14 |
15 |
16 | def getGenerationArgsForHiresFixPass(gArgs: GenerationArgs) -> GenerationArgs:
17 | hf = gArgs.hires_fix_args
18 | if hf.positive_prompt_suffix == "":
19 | hf.positive_prompt_suffix = getHiresFixPositivePromptSuffixExamples()[0]
20 | hrGArgs = copy.copy(gArgs)
21 | hrGArgs.upscalerForImg2Img = hf.above_limit_upscaler
22 | hrGArgs.cfg_scale = hf.cfg_scale
23 | hrGArgs.denoising_strength = hf.denoise
24 | if not hf.sampler == 'Use same sampler':
25 | hrGArgs.sampler_name = hf.sampler
26 | if not hf.scheduler == 'Use same scheduler':
27 |         hrGArgs.scheduler = hf.scheduler
28 | if hf.steps != 0:
29 | hrGArgs.steps = hf.steps
30 | if hf.extra_mask_expand != 0:
31 | hrGArgs.mask = extraMaskExpand(hrGArgs.mask, hf.extra_mask_expand)
32 | hrGArgs.inpainting_fill = 1 # Original
33 | hrGArgs.img2img_fix_steps = True
34 | if hf.disable_cn:
35 | hrGArgs.cn_args = None
36 | if hf.soft_inpaint != 'Same' and hrGArgs.soft_inpaint_args is not None and len(hrGArgs.soft_inpaint_args) != 0:
37 | hrGArgs.soft_inpaint_args = list(hrGArgs.soft_inpaint_args)
38 | hrGArgs.soft_inpaint_args[0] = hf.soft_inpaint == 'Enable'
39 | if hf.positive_prompt != "":
40 | hrGArgs.positivePrompt = hf.positive_prompt
41 | hrGArgs.positivePrompt = hrGArgs.positivePrompt + " " + hf.positive_prompt_suffix
42 | if hf.negative_prompt != "":
43 | hrGArgs.negativePrompt = hf.negative_prompt
44 | if hf.sd_model_checkpoint is not None and hf.sd_model_checkpoint != 'Use same model'\
45 | and hf.sd_model_checkpoint != 'Use same checkpoint' and hf.sd_model_checkpoint != "":
46 | hrGArgs.sd_model_checkpoint = hf.sd_model_checkpoint
47 | hrGArgs.inpaint_full_res_padding += hf.extra_inpaint_padding
48 | hrGArgs.mask_blur += hf.extra_mask_blur
49 | if hf.randomize_seed:
50 | hrGArgs.seed = generateSeed()
51 |
52 | if hf.unload_detection_models:
53 | clearCache()
54 |
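    # Hires pass resolution: the actual crop region (with padding) scaled by the supersampling factor,
    # rounded up to a multiple of 8 and clamped to size_limit; the base width/height is used instead
    # when the crop is smaller in both dimensions. E.g. a 300x500 crop with supersampling 1.5 -> 456x752
    # before the clamp.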
55 | x1, y1, x2, y2 = getActualCropRegion(hrGArgs.mask, hrGArgs.inpaint_full_res_padding, gArgs.inpainting_mask_invert)
56 | width = (x2-x1)
57 | height = (y2-y1)
58 | if width < gArgs.width and height < gArgs.height:
59 | width = gArgs.width
60 | height = gArgs.height
61 | hrGArgs.width = int(width * hf.supersampling)
62 | hrGArgs.width = hrGArgs.width - hrGArgs.width%8 + 8
63 | hrGArgs.height = int(height * hf.supersampling)
64 | hrGArgs.height = hrGArgs.height - hrGArgs.height%8 + 8
65 | if hrGArgs.width > hf.size_limit:
66 | hrGArgs.width = hf.size_limit
67 | if hrGArgs.height > hf.size_limit:
68 | hrGArgs.height = hf.size_limit
69 | hrGArgs.correct_aspect_ratio = False
70 | hrGArgs.forbid_too_small_crop_region = False
71 | print(f'Hires fix resolution is {hrGArgs.width}x{hrGArgs.height}')
72 |
73 | hrGArgs.batch_count = 1
74 | hrGArgs.batch_size = 1
75 | hrGArgs.addHiresFixIntoMetadata = True
76 |
77 | return hrGArgs
78 |
--------------------------------------------------------------------------------
/replacer/inpaint.py:
--------------------------------------------------------------------------------
1 | from contextlib import closing
2 | from PIL import Image
3 | import modules.shared as shared
4 | from modules.processing import StableDiffusionProcessingImg2Img, process_images, Processed
5 | from modules.shared import opts
6 | from modules.images import save_image
7 | from modules import errors
8 | from replacer.generation_args import GenerationArgs
9 | from replacer.extensions import replacer_extensions
10 | from replacer.tools import addReplacerMetadata, limitSizeByOneDimension, applyRotationFix, removeRotationFix, getActualCropRegion
11 | from replacer.ui.tools_ui import IS_WEBUI_1_9
12 |
13 |
14 |
15 |
16 | def inpaint(
17 | image : Image.Image,
18 | gArgs : GenerationArgs,
19 | savePath : str = "",
20 | saveSuffix : str = "",
21 | save_to_dirs : bool = True,
22 | batch_processed : Processed = None
23 | ):
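    # Aspect ratio correction keeps roughly the requested pixel count (originalW * originalH) while
    # matching the crop region's aspect ratio, so newW*newH ~ originalW*originalH and
    # newW/newH ~ (x2-x1)/(y2-y1); both are rounded down to multiples of 8.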
24 | if gArgs.correct_aspect_ratio:
25 | if gArgs.originalW is None:
26 | gArgs.originalW = gArgs.width
27 | gArgs.originalH = gArgs.height
28 | if gArgs.mask:
29 | x1, y1, x2, y2 = getActualCropRegion(gArgs.mask, gArgs.inpaint_full_res_padding, gArgs.inpainting_mask_invert)
30 | else:
31 | x1, y1, x2, y2 = 0, 0, image.width, image.height
32 | if not gArgs.forbid_too_small_crop_region or (x2-x1) > gArgs.originalW or (y2-y1) > gArgs.originalH:
33 | ratio = (x2-x1) / (y2-y1)
34 | pixels = gArgs.originalW * gArgs.originalH
35 | newW = (pixels * ratio)**0.5
36 | newW = int(newW)
37 | newW = newW - newW%8
38 | newH = (pixels / ratio)**0.5
39 | newH = int(newH)
40 | newH = newH - newH%8
41 | print(f'Aspect ratio has been corrected from {gArgs.originalW}x{gArgs.originalH} to {newW}x{newH}')
42 | gArgs.width = newW
43 | gArgs.height = newH
44 |
45 | override_settings = {}
46 | if gArgs.upscalerForImg2Img is not None and gArgs.upscalerForImg2Img != "" and gArgs.upscalerForImg2Img != "None":
47 | override_settings["upscaler_for_img2img"] = gArgs.upscalerForImg2Img
48 | if hasattr(shared.opts, "img2img_upscaler_preserve_colors"):
49 | override_settings["img2img_upscaler_preserve_colors"] = True
50 | if gArgs.sd_model_checkpoint is not None and gArgs.sd_model_checkpoint != "":
51 | override_settings["sd_model_checkpoint"] = gArgs.sd_model_checkpoint
52 | override_settings["img2img_fix_steps"] = gArgs.img2img_fix_steps
53 | override_settings["CLIP_stop_at_last_layers"] = gArgs.clip_skip
54 | if hasattr(shared.opts, 'integer_only_masked'):
55 | override_settings["integer_only_masked"] = gArgs.integer_only_masked
56 | if hasattr(shared.opts, "forbid_too_small_crop_region"):
57 | override_settings["forbid_too_small_crop_region"] = gArgs.forbid_too_small_crop_region
58 |
59 | mask = gArgs.mask
60 | if mask:
61 | mask = mask.resize(image.size).convert('L')
62 | schedulerKWargs = {"scheduler": gArgs.scheduler} if IS_WEBUI_1_9 else {}
63 |
64 | image = applyRotationFix(image, gArgs.rotation_fix)
65 | mask = applyRotationFix(mask, gArgs.rotation_fix)
66 |
67 | p = StableDiffusionProcessingImg2Img(
68 | sd_model=shared.sd_model,
69 | outpath_samples=opts.outdir_samples or opts.outdir_img2img_samples,
70 | outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids,
71 | prompt=gArgs.positivePrompt,
72 | negative_prompt=gArgs.negativePrompt,
73 | styles=[],
74 | sampler_name=gArgs.sampler_name,
75 | batch_size=gArgs.batch_size,
76 | n_iter=gArgs.batch_count,
77 | steps=gArgs.steps,
78 | cfg_scale=gArgs.cfg_scale,
79 | width=gArgs.width,
80 | height=gArgs.height,
81 | init_images=[image],
82 | mask=mask,
83 | mask_blur=gArgs.mask_blur,
84 | inpainting_fill=gArgs.inpainting_fill,
85 | resize_mode=0,
86 | denoising_strength=gArgs.denoising_strength,
87 | image_cfg_scale=1.5,
88 | inpaint_full_res=True,
89 | inpaint_full_res_padding=gArgs.inpaint_full_res_padding,
90 | inpainting_mask_invert=gArgs.inpainting_mask_invert,
91 | override_settings=override_settings,
92 | do_not_save_samples=True,
93 | subseed=gArgs.variation_seed,
94 | subseed_strength=gArgs.variation_strength,
95 | **schedulerKWargs,
96 | )
97 |
98 | if gArgs.do_not_use_mask and (not gArgs.upscalerForImg2Img or gArgs.upscalerForImg2Img == "None"):
99 | p.inpaint_full_res = False
100 | p.resize_mode = 2 # resize and fill
101 | p.width, p.height = limitSizeByOneDimension(image.size, max(p.width, p.height))
102 | p.extra_generation_params["Mask blur"] = gArgs.mask_blur
103 | addReplacerMetadata(p, gArgs)
104 | p.seed = gArgs.seed
105 | p.do_not_save_grid = True
106 | try:
107 | if replacer_extensions.controlnet.SCRIPT and gArgs.cn_args is not None and len(gArgs.cn_args) != 0:
108 | previousFrame = None
109 | if batch_processed:
110 | previousFrame = batch_processed.images[-1]
111 | replacer_extensions.controlnet.enableInpaintModeForCN(gArgs, p, previousFrame)
112 | except Exception as e:
113 | if isinstance(e, replacer_extensions.controlnet.UnitIsReserved):
114 | raise
115 | errors.report(f"Error {e}", exc_info=True)
116 |
117 | replacer_extensions.applyScripts(p, gArgs)
118 |
119 |
120 |
121 | with closing(p):
122 | processed = process_images(p)
123 |
124 | needRestoreAfterCN = getattr(p, 'needRestoreAfterCN', False)
125 | if needRestoreAfterCN:
126 | replacer_extensions.controlnet.restoreAfterCN(image, mask, gArgs, processed)
127 |
128 | for i in range(len(processed.images)):
129 | processed.images[i] = removeRotationFix(processed.images[i], gArgs.rotation_fix)
130 |
131 | scriptImages = processed.images[len(processed.all_seeds):]
132 | processed.images = processed.images[:len(processed.all_seeds)]
133 | scriptImages.extend(getattr(processed, 'extra_images', []))
134 |
135 |
136 |
137 |
138 | if savePath:
139 | for i in range(len(processed.images)):
140 | additional_save_suffix = getattr(image, 'additional_save_suffix', None)
141 | suffix = saveSuffix
142 | if additional_save_suffix:
143 | suffix = additional_save_suffix + suffix
144 | save_image(processed.images[i], savePath, "", processed.all_seeds[i], gArgs.positivePrompt, opts.samples_format,
145 | info=processed.infotext(p, i), p=p, suffix=suffix, save_to_dirs=save_to_dirs)
146 |
147 | if opts.do_not_show_images:
148 | processed.images = []
149 |
150 | if batch_processed:
151 | batch_processed.images += processed.images
152 | batch_processed.infotexts += processed.infotexts
153 | processed = batch_processed
154 |
155 | return processed, scriptImages
156 |
157 |
--------------------------------------------------------------------------------
/replacer/mask_creator.py:
--------------------------------------------------------------------------------
1 | from PIL import Image, ImageOps
2 | from dataclasses import dataclass
3 | from modules import devices
4 | from replacer.extensions import replacer_extensions
5 | from replacer.generation_args import GenerationArgs
6 | from replacer.options import EXT_NAME, useCpuForDetection, useFastDilation
7 | from replacer.tools import areImagesTheSame, limitImageByOneDimension, fastMaskDilate, applyRotationFix, removeRotationFix
8 | sam_predict = None
9 | update_mask = None
10 | clear_cache = None
11 |
12 | def initSamDependencies():
13 | global sam_predict, update_mask, clear_cache
14 | if not sam_predict or not update_mask or not clear_cache:
15 | import scripts.sam
16 | sam_predict = scripts.sam.sam_predict
17 | if useFastDilation():
18 | update_mask = fastMaskDilate
19 | else:
20 | update_mask = scripts.sam.update_mask
21 | clear_cache = scripts.sam.clear_cache
22 | if useCpuForDetection():
23 | scripts.sam.sam_device = 'cpu'
24 | print('Use CPU for SAM')
25 |
26 |
27 | class NothingDetectedError(Exception):
28 | def __init__(self):
29 | super().__init__("Nothing has been detected")
30 |
31 |
32 |
33 | masksCreatorCached = None
34 |
35 |
36 | class MasksCreator:
37 | def __init__(self, detectionPrompt, avoidancePrompt, image, samModel, grdinoModel, boxThreshold,
38 | maskExpand, maxResolutionOnDetection, avoidance_mask, custom_mask, rotation_fix):
39 | self.detectionPrompt = detectionPrompt
40 | self.avoidancePrompt = avoidancePrompt
41 | self.image = image
42 | self.samModel = samModel
43 | self.grdinoModel = grdinoModel
44 | self.boxThreshold = boxThreshold
45 | self.maskExpand = maskExpand
46 | self.maxResolutionOnDetection = maxResolutionOnDetection
47 | self.avoidance_mask = avoidance_mask
48 | self.custom_mask = custom_mask
49 | self.rotation_fix = rotation_fix
50 |
51 | global masksCreatorCached
52 |
53 | if masksCreatorCached is not None and \
54 | self.detectionPrompt == masksCreatorCached.detectionPrompt and\
55 | self.avoidancePrompt == masksCreatorCached.avoidancePrompt and\
56 | self.samModel == masksCreatorCached.samModel and\
57 | self.grdinoModel == masksCreatorCached.grdinoModel and\
58 | self.boxThreshold == masksCreatorCached.boxThreshold and\
59 | self.maskExpand == masksCreatorCached.maskExpand and\
60 | self.maxResolutionOnDetection == masksCreatorCached.maxResolutionOnDetection and\
61 | self.rotation_fix == masksCreatorCached.rotation_fix and\
62 | areImagesTheSame(self.image, masksCreatorCached.image) and\
63 | areImagesTheSame(self.avoidance_mask, masksCreatorCached.avoidance_mask) and\
64 | areImagesTheSame(self.custom_mask, masksCreatorCached.custom_mask):
65 | self.previews = masksCreatorCached.previews
66 | self.masks = masksCreatorCached.masks
67 | self.cut = masksCreatorCached.cut
68 | self.boxes = masksCreatorCached.boxes
69 | print('MasksCreator restored from cache')
70 | else:
71 | restoreList = []
72 | try:
73 | if useCpuForDetection():
74 | oldDevice = devices.device
75 | def restore():
76 | devices.device = oldDevice
77 | restoreList.append(restore)
78 | devices.device = 'cpu'
79 | print('Use CPU for detection')
80 | self._createMasks()
81 | masksCreatorCached = self
82 | print('MasksCreator cached')
83 | finally:
84 | for restore in restoreList:
85 | restore()
86 |
87 |
88 | def _createMasks(self):
89 | initSamDependencies()
90 | self.previews = []
91 | self.masks = []
92 | self.cut = []
93 | self.boxes = []
94 |
95 | imageResized = limitImageByOneDimension(self.image, self.maxResolutionOnDetection)
96 | imageResized = applyRotationFix(imageResized, self.rotation_fix)
97 | if self.avoidance_mask is None:
98 | customAvoidanceMaskResized = None
99 | else:
100 | customAvoidanceMaskResized = self.avoidance_mask.resize(imageResized.size)
101 | customAvoidanceMaskResized = applyRotationFix(customAvoidanceMaskResized, self.rotation_fix)
102 | masks, samLog = sam_predict(self.samModel, imageResized, [], [], True,
103 | self.grdinoModel, self.detectionPrompt, self.boxThreshold, False, [])
104 | print(samLog)
105 | if len(masks) == 0:
106 | if self.custom_mask is not None:
107 | print(f'[{EXT_NAME}] nothing has been detected by detection prompt, but there is custom mask')
108 | self.masks = [self.custom_mask]
109 | return
110 | else:
111 | raise NothingDetectedError()
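        # sam_predict returns a gallery of six images: the first three are used here as box previews,
        # the last three as the candidate masks.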
112 | boxes = [masks[0], masks[1], masks[2]]
113 | masks = [masks[3], masks[4], masks[5]]
114 |
115 | self.boxes = boxes
116 |
117 | for mask in masks:
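            # A negative maskExpand means erosion: invert the mask, dilate by |maskExpand|, invert back,
            # then run update_mask once more with zero expansion to rebuild the preview/cut images.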
118 | if self.maskExpand >= 0:
119 | expanded = update_mask(mask, 0, self.maskExpand, imageResized)
120 | else:
121 | mask = ImageOps.invert(mask.convert('L'))
122 | expanded = update_mask(mask, 0, -self.maskExpand, imageResized)
123 | mask = ImageOps.invert(expanded[1])
124 | expanded = update_mask(mask, 0, 0, imageResized)
125 |
126 | self.previews.append(expanded[0])
127 | self.masks.append(expanded[1])
128 | self.cut.append(expanded[2])
129 |
130 | if self.avoidancePrompt != "":
131 | detectedAvoidanceMasks, samLog = sam_predict(self.samModel, imageResized, [], [], True,
132 | self.grdinoModel, self.avoidancePrompt, self.boxThreshold, False, [])
133 | print(samLog)
134 | if len(detectedAvoidanceMasks) == 0:
135 | print(f'[{EXT_NAME}] nothing has been detected by avoidance prompt')
136 | if customAvoidanceMaskResized:
137 | avoidanceMasks = [customAvoidanceMaskResized, customAvoidanceMaskResized, customAvoidanceMaskResized]
138 | else:
139 | avoidanceMasks = None
140 | else:
141 | avoidanceMasks = [detectedAvoidanceMasks[3], detectedAvoidanceMasks[4], detectedAvoidanceMasks[5]]
142 | if customAvoidanceMaskResized is not None:
143 | for i in range(len(avoidanceMasks)):
144 | avoidanceMasks[i] = avoidanceMasks[i].convert('L')
145 | avoidanceMasks[i].paste(customAvoidanceMaskResized, customAvoidanceMaskResized)
146 | else:
147 | if customAvoidanceMaskResized:
148 | avoidanceMasks = [customAvoidanceMaskResized, customAvoidanceMaskResized, customAvoidanceMaskResized]
149 | else:
150 | avoidanceMasks = None
151 |
152 | if avoidanceMasks is not None:
153 | for i in range(len(self.masks)):
154 | maskTmp = ImageOps.invert(self.masks[i].convert('L'))
155 | whiteFilling = Image.new('L', maskTmp.size, 255)
156 | maskTmp.paste(whiteFilling, avoidanceMasks[i])
157 | self.masks[i] = ImageOps.invert(maskTmp)
158 | self.previews[i].paste(imageResized, avoidanceMasks[i])
159 | transparent = Image.new('RGBA', imageResized.size, (255, 0, 0, 0))
160 | self.cut[i].paste(transparent, avoidanceMasks[i])
161 |
162 | if self.custom_mask is not None:
163 | self.custom_mask = applyRotationFix(self.custom_mask, self.rotation_fix)
164 | for i in range(len(self.masks)):
165 | whiteFilling = Image.new('L', self.masks[i].size, 255)
166 | self.masks[i].paste(whiteFilling, self.custom_mask.resize(self.masks[i].size))
167 |
168 | for i in range(len(self.masks)):
169 | self.masks[i] = removeRotationFix(self.masks[i], self.rotation_fix)
170 | for i in range(len(self.previews)):
171 | self.previews[i] = removeRotationFix(self.previews[i], self.rotation_fix)
172 | for i in range(len(self.cut)):
173 | self.cut[i] = removeRotationFix(self.cut[i], self.rotation_fix)
174 | for i in range(len(self.boxes)):
175 | self.boxes[i] = removeRotationFix(self.boxes[i], self.rotation_fix)
176 |
177 |
178 |
179 | @dataclass
180 | class MaskResult:
181 | mask: Image.Image
182 | maskPreview: Image.Image
183 | maskCut: Image.Image
184 | maskBox: Image.Image
185 |
186 |
187 | def createMask(image: Image.Image, gArgs: GenerationArgs) -> MaskResult:
188 | maskPreview = None
189 | maskCut = None
190 | maskBox = None
191 |
192 | if gArgs.do_not_use_mask:
193 | mask = Image.new('L', image.size, 255)
194 | elif gArgs.use_inpaint_diff:
195 | mask = replacer_extensions.inpaint_difference.Globals.generated_mask.convert('L')
196 |
197 | elif gArgs.only_custom_mask and gArgs.custom_mask is not None:
198 | mask = gArgs.custom_mask
199 |
200 | else:
201 | masksCreator = MasksCreator(gArgs.detectionPrompt, gArgs.avoidancePrompt, image, gArgs.samModel,
202 | gArgs.grdinoModel, gArgs.boxThreshold, gArgs.maskExpand, gArgs.maxResolutionOnDetection,
203 | gArgs.avoidance_mask, gArgs.custom_mask, gArgs.rotation_fix)
204 |
205 | if masksCreator.previews != []:
206 | if gArgs.mask_num == 'Random':
207 | maskNum = gArgs.seed % len(masksCreator.previews)
208 | else:
209 | maskNum = int(gArgs.mask_num) - 1
210 | mask = masksCreator.masks[maskNum]
211 | gArgs.mask_num_for_metadata = maskNum + 1
212 |
213 | maskPreview = masksCreator.previews[maskNum]
214 | maskCut = masksCreator.cut[maskNum]
215 | maskBox = masksCreator.boxes[maskNum]
216 | else:
217 | mask = gArgs.custom_mask
218 |
219 | return MaskResult(mask, maskPreview, maskCut, maskBox)
220 |
--------------------------------------------------------------------------------
/replacer/tools.py:
--------------------------------------------------------------------------------
1 | import cv2, random, git, torch, os, time, urllib.parse, copy, shutil
2 | import numpy as np
3 | from PIL import ImageChops, Image, ImageColor
4 | from dataclasses import dataclass
5 | import base64
6 | from io import BytesIO
7 | import gradio as gr
8 | from modules.images import resize_image
9 | from modules import errors, shared, masking
10 | from modules.ui import versions_html
11 | from replacer.generation_args import GenerationArgs
12 | from replacer.options import useFastDilation, getMaskColorStr, EXT_ROOT_DIRECTORY, EXT_NAME
13 |
14 | try:
15 | REPLACER_VERSION = git.Repo(__file__, search_parent_directories=True).head.object.hexsha[:7]
16 | except Exception:
17 | errors.report(f"Error reading replacer git info from {__file__}", exc_info=True)
18 | REPLACER_VERSION = "None"
19 |
20 | colorfix = None
21 | try:
22 | from modules import colorfix
23 | except ImportError:
24 | try:
25 | from srmodule import colorfix
26 | except ImportError:
27 | pass
28 |
29 |
30 | def addReplacerMetadata(p, gArgs: GenerationArgs):
31 | p.extra_generation_params["Extension"] = f'sd-webui-replacer {REPLACER_VERSION}'
32 | if gArgs.detectionPrompt != '':
33 | p.extra_generation_params["Detection prompt"] = gArgs.detectionPrompt
34 | if gArgs.avoidancePrompt != '':
35 | p.extra_generation_params["Avoidance prompt"] = gArgs.avoidancePrompt
36 | p.extra_generation_params["Sam model"] = gArgs.samModel
37 | p.extra_generation_params["GrDino model"] = gArgs.grdinoModel
38 | p.extra_generation_params["Box threshold"] = gArgs.boxThreshold
39 | p.extra_generation_params["Mask expand"] = gArgs.maskExpand
40 | p.extra_generation_params["Max resolution on detection"] = gArgs.maxResolutionOnDetection
41 | if gArgs.mask_num_for_metadata is not None:
42 | p.extra_generation_params["Mask num"] = gArgs.mask_num_for_metadata
43 | if gArgs.addHiresFixIntoMetadata:
44 | pass
45 |
46 |
47 | def areImagesTheSame(image_one, image_two):
48 | if image_one is None or image_two is None:
49 | return image_one is None and image_two is None
50 | if image_one.size != image_two.size:
51 | return False
52 |
53 | diff = ImageChops.difference(image_one.convert('RGB'), image_two.convert('RGB'))
54 |
55 | if diff.getbbox():
56 | return False
57 | else:
58 | return True
59 |
60 |
61 | def limitSizeByOneDimension(size: tuple, limit: int) -> tuple:
62 | w, h = size
63 | if h > w:
64 | if h > limit:
65 | w = limit / h * w
66 | h = limit
67 | else:
68 | if w > limit:
69 | h = limit / w * h
70 | w = limit
71 |
72 | return (int(w), int(h))
73 |
74 |
75 | def limitImageByOneDimension(image: Image.Image, limit: int) -> Image.Image:
76 | if image is None:
77 | return None
78 | return image.resize(limitSizeByOneDimension(image.size, limit))
79 |
80 |
81 | def fastMaskDilate_(mask, dilation_amount):
82 | if dilation_amount == 0:
83 | return mask
84 |
85 | oldMode = mask.mode
86 | mask = np.array(mask.convert('RGB')).astype(np.int32)
87 | tensor_mask = torch.from_numpy((mask / 255).astype(np.float32)).permute(2, 0, 1).unsqueeze(0)
88 |
89 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
90 | tensor_mask = tensor_mask.to(device)
91 | kernel = torch.ones(1, 1, 3, 3).to(device)
92 |
93 | tensor_mask_r = tensor_mask[:, 0:1, :, :]
94 | tensor_mask_g = tensor_mask[:, 1:2, :, :]
95 | tensor_mask_b = tensor_mask[:, 2:3, :, :]
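    # Each pass convolves with an all-ones 3x3 kernel and thresholds > 0, dilating the mask by one
    # pixel per iteration (done per channel, on the GPU when available).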
96 | for _ in range(dilation_amount):
97 | tensor_mask_r = (torch.nn.functional.conv2d(tensor_mask_r, kernel, padding=1) > 0).float()
98 | tensor_mask_g = (torch.nn.functional.conv2d(tensor_mask_g, kernel, padding=1) > 0).float()
99 | tensor_mask_b = (torch.nn.functional.conv2d(tensor_mask_b, kernel, padding=1) > 0).float()
100 |
101 | tensor_mask = torch.cat((tensor_mask_r, tensor_mask_g, tensor_mask_b), dim=1)
102 | dilated_mask = tensor_mask.squeeze(0).permute(1, 2, 0).cpu().numpy()
103 | dilated_mask = (dilated_mask * 255).astype(np.uint8)
104 | return Image.fromarray(dilated_mask).convert(oldMode)
105 |
106 |
107 | def makePreview(image: Image.Image, mask: Image.Image):
108 | mask = mask.convert('L')
109 | maskFilling = Image.new('RGBA', mask.size, (0, 0, 0, 0))
110 | maskFilling.paste(Image.new('RGBA', mask.size, ImageColor.getcolor(f'{getMaskColorStr()}7F', 'RGBA')), mask)
111 | preview = image.resize(mask.size)
112 | preview.paste(maskFilling, (0, 0), maskFilling)
113 | return preview
114 |
115 |
116 |
117 | def fastMaskDilate(mask, _, dilation_amount, imageResized):
118 | print("Dilation Amount: ", dilation_amount)
119 | dilated_mask = fastMaskDilate_(mask, dilation_amount // 2)
120 | preview = makePreview(imageResized, dilated_mask)
121 | cut = Image.new('RGBA', dilated_mask.size, (0, 0, 0, 0))
122 | cut.paste(imageResized, dilated_mask)
123 |
124 | return [preview, dilated_mask, cut]
125 |
126 |
127 | @dataclass
128 | class CachedExtraMaskExpand:
129 | mask: Image.Image
130 | expand: int
131 | result: Image.Image
132 |
133 | cachedExtraMaskExpand: CachedExtraMaskExpand = None
134 | update_mask = None
135 |
136 |
137 | def extraMaskExpand(mask: Image.Image, expand: int):
138 | global cachedExtraMaskExpand, update_mask
139 |
140 | if cachedExtraMaskExpand is not None and\
141 | cachedExtraMaskExpand.expand == expand and\
142 | areImagesTheSame(cachedExtraMaskExpand.mask, mask):
143 | print('extraMaskExpand restored from cache')
144 | return cachedExtraMaskExpand.result
145 | else:
146 | if update_mask is None:
147 | if useFastDilation():
148 | update_mask = fastMaskDilate
149 | else:
150 | from scripts.sam import update_mask as update_mask_
151 | update_mask = update_mask_
152 | expandedMask = update_mask(mask, 0, expand, mask.convert('RGBA'))[1]
153 | cachedExtraMaskExpand = CachedExtraMaskExpand(mask, expand, expandedMask)
154 | print('extraMaskExpand cached')
155 | return expandedMask
156 |
157 |
158 | def prepareMask(mask_mode, mask_raw):
159 | if mask_mode is None or mask_raw is None:
160 | return None
161 | mask = None
162 | if 'Upload mask' in mask_mode:
163 | mask = mask_raw['image'].convert('L')
164 | if 'Draw mask' in mask_mode:
165 | mask = Image.new('L', mask_raw['mask'].size, 0) if mask is None else mask
166 | draw_mask = mask_raw['mask'].convert('L')
167 | mask.paste(draw_mask, draw_mask)
168 | blackFilling = Image.new('L', mask.size, 0)
169 | if areImagesTheSame(blackFilling, mask):
170 | return None
171 | return mask
172 |
173 |
174 | def applyMaskBlur(image_mask, mask_blur):
175 | originalMode = image_mask.mode
176 | if mask_blur > 0:
177 | np_mask = np.array(image_mask).astype(np.uint8)
178 | kernel_size = 2 * int(2.5 * mask_blur + 0.5) + 1
179 | np_mask = cv2.GaussianBlur(np_mask, (kernel_size, kernel_size), mask_blur)
180 | image_mask = Image.fromarray(np_mask).convert(originalMode)
181 | return image_mask
182 |
183 |
184 | def applyMask(res, orig, mask, gArgs):
185 | upscaler = gArgs.upscalerForImg2Img
186 | if upscaler == "":
187 | upscaler = None
188 |
189 | w, h = orig.size
190 |
191 | imageProc = resize_image(1, res.convert('RGB'), w, h, upscaler).convert('RGBA') # 1 - resize and crop
192 |
193 | mask = mask.convert('L')
194 | if gArgs.inpainting_mask_invert:
195 | mask = ImageChops.invert(mask)
196 | mask = mask.resize(orig.size)
197 | new = copy.copy(orig)
198 | new.paste(imageProc, mask)
199 | return new
200 |
201 |
202 | def generateSeed():
203 | return int(random.randrange(4294967294))
204 |
205 |
206 |
207 |
208 | def getReplacerFooter():
209 | footer = ""
210 | try:
211 | with open(os.path.join(EXT_ROOT_DIRECTORY, 'html', 'replacer_footer.html'), encoding="utf8") as file:
212 | footer = file.read()
213 | footer = footer.format(versions=versions_html()
214 | .replace('checkpoint: N/A',
215 | f'replacer: {REPLACER_VERSION}'))
216 | except Exception as e:
217 | errors.report(f"Error getReplacerFooter: {e}", exc_info=True)
218 | return ""
219 | return footer
220 |
221 |
222 | def interrupted():
223 | return shared.state.interrupted or getattr(shared.state, 'stopping_generation', False)
224 |
225 |
226 | g_clear_cache = None
227 | def clearCache():
228 | global g_clear_cache
229 | if g_clear_cache is None:
230 | from scripts.sam import clear_cache
231 | g_clear_cache = clear_cache
232 | g_clear_cache()
233 |
234 |
235 | class Pause:
236 | paused = False
237 |
238 | @staticmethod
239 | def toggle():
240 | if shared.state.job == '':
241 | return
242 | Pause.paused = not Pause.paused
243 | text = f" [{EXT_NAME}]: "
244 | text += "Paused" if Pause.paused else "Resumed"
245 | text += " generation"
246 | gr.Info(text)
247 | print(text)
248 | if Pause.paused:
249 | shared.state.textinfo = "will be paused"
250 |
251 | @staticmethod
252 | def wait():
253 | if not Pause.paused:
254 | return
255 |
256 | print(f" [{EXT_NAME}] paused")
257 | while Pause.paused:
258 | shared.state.textinfo = "paused"
259 | time.sleep(0.2)
260 | if interrupted():
261 | return
262 |
263 | print(f" [{EXT_NAME}] resumed")
264 | shared.state.textinfo = "resumed"
265 |
266 |
267 | def convertIntoPath(string: str) -> str:
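    |     # Normalizes user-entered paths: strips surrounding quotes and converts
    |     # file:// or fish:// URLs (e.g. pasted from a file manager) into plain paths.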
268 | string = string.strip()
269 | if not string: return string
270 | if len(string) > 3 and string[0] == string[-1] and string[0] in ('"', "'"):
271 | string = string[1:-1]
272 |
273 | schemes = ['file', 'fish']
274 | prefixes = [f'{x}://' for x in schemes]
275 | isURL = any(string.startswith(x) for x in prefixes)
276 |
277 | if not isURL:
278 | return string
279 | else:
280 | for prefix in prefixes:
281 | if string.startswith(prefix):
282 | string = urllib.parse.unquote(string.removeprefix(prefix))
283 | string = string.removeprefix(string.split('/')[0]) # removes user:password@host:port if exists
284 | return string
285 |
286 |         errors.report("Can't be here")  # unreachable: isURL guarantees one prefix matched above
287 | return string
288 |
289 |
290 | def applyRotationFix(image: Image.Image, fix: str) -> Image.Image:
291 | if image is None:
292 | return None
293 | if fix == '-' or not fix:
294 | return image
295 | if fix == '⟲':
296 | return image.transpose(Image.ROTATE_90)
297 | if fix == '🗘':
298 | return image.transpose(Image.ROTATE_180)
299 | if fix == '⟳':
300 | return image.transpose(Image.ROTATE_270)
301 |
302 | def removeRotationFix(image: Image.Image, fix: str) -> Image.Image:
303 | if image is None:
304 | return None
305 | if fix == '-' or not fix:
306 | return image
307 | if fix == '⟲':
308 | return image.transpose(Image.ROTATE_270)
309 | if fix == '🗘':
310 | return image.transpose(Image.ROTATE_180)
311 | if fix == '⟳':
312 | return image.transpose(Image.ROTATE_90)
313 |
314 |
315 | def getActualCropRegion(mask: Image.Image, padding: int, invert: bool):
316 | if invert:
317 | mask = ImageChops.invert(mask)
318 | if hasattr(masking, 'get_crop_region_v2'):
319 | crop_region = masking.get_crop_region_v2(mask, padding)
320 | else:
321 | crop_region = masking.get_crop_region(mask, padding)
322 |
323 | return crop_region
324 |
325 |
326 | def pil_to_base64_jpeg(pil_image):
327 | if not pil_image:
328 | return ""
329 | buffer = BytesIO()
330 | pil_image.convert('RGB').save(buffer, format="JPEG")
331 | buffer.seek(0)
332 | img_str = base64.b64encode(buffer.read()).decode("utf-8")
333 | base64_str = "data:image/jpeg;base64," + img_str
334 | return base64_str
335 |
336 |
337 | def copyOrHardLink(source: str, target: str) -> None:
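    |     # On Windows the file is always copied; elsewhere a hard link is tried first
    |     # (cheap, no duplicated data) with a plain copy as the fallback.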
338 | if os.name == 'nt':
339 | shutil.copy(source, target)
340 | else:
341 | try:
342 | os.link(source, target)
343 |         except Exception:
344 |             shutil.copy(source, target)  # hard link can fail (e.g. across filesystems); fall back to copying
345 |
346 |
--------------------------------------------------------------------------------
/replacer/ui/apply_hires_fix.py:
--------------------------------------------------------------------------------
1 | import copy, json
2 | from PIL import Image
3 | import modules.shared as shared
4 | from modules.ui import plaintext_to_html
5 | from modules import errors
6 | from replacer.generation_args import HiresFixCacheData, HiresFixArgs
7 | from replacer.options import getSaveDir
8 | from replacer.tools import interrupted
9 | from replacer.ui import generate_ui
10 | from replacer.inpaint import inpaint
11 | from replacer.hires_fix import getGenerationArgsForHiresFixPass, prepareGenerationArgsBeforeHiresFixPass
12 |
13 |
14 |
15 |
16 | def applyHiresFix(
17 | id_task,
18 | gallery_idx,
19 | gallery,
20 | generation_info,
21 | hf_upscaler,
22 | hf_steps,
23 | hf_sampler,
24 | hf_scheduler,
25 | hf_denoise,
26 | hf_cfg_scale,
27 | hf_positive_prompt_suffix,
28 | hf_size_limit,
29 | hf_above_limit_upscaler,
30 | hf_unload_detection_models,
31 | hf_disable_cn,
32 | hf_extra_mask_expand,
33 | hf_positive_prompt,
34 | hf_negative_prompt,
35 | hf_sd_model_checkpoint,
36 | hf_extra_inpaint_padding,
37 | hf_extra_mask_blur,
38 | hf_randomize_seed,
39 | hf_soft_inpaint,
40 | hf_supersampling,
41 | ):
42 | original_gallery = []
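    |     # Rebuild the gallery as 1x1 placeholder images whose `already_saved_as` points at the
    |     # files already on disk, so webui can reference them without re-saving anything.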
43 | for image in gallery:
44 | fake_image = Image.new(mode="RGB", size=(1, 1))
45 | fake_image.already_saved_as = image["name"].rsplit('?', 1)[0]
46 | original_gallery.append(fake_image)
47 |
48 | if generate_ui.lastGenerationArgs is None:
49 | return original_gallery, generation_info, plaintext_to_html("no last generation data"), ""
50 |
51 | gArgs = copy.copy(generate_ui.lastGenerationArgs)
52 | hires_fix_args = HiresFixArgs(
53 | upscaler = hf_upscaler,
54 | steps = hf_steps,
55 | sampler = hf_sampler,
56 | scheduler = hf_scheduler,
57 | denoise = hf_denoise,
58 | cfg_scale = hf_cfg_scale,
59 | positive_prompt_suffix = hf_positive_prompt_suffix,
60 | size_limit = hf_size_limit,
61 | above_limit_upscaler = hf_above_limit_upscaler,
62 | unload_detection_models = hf_unload_detection_models,
63 | disable_cn = hf_disable_cn,
64 | extra_mask_expand = hf_extra_mask_expand,
65 | positive_prompt = hf_positive_prompt,
66 | negative_prompt = hf_negative_prompt,
67 | sd_model_checkpoint = hf_sd_model_checkpoint,
68 | extra_inpaint_padding = hf_extra_inpaint_padding,
69 | extra_mask_blur = hf_extra_mask_blur,
70 | randomize_seed = hf_randomize_seed,
71 | soft_inpaint = hf_soft_inpaint,
72 | supersampling = hf_supersampling,
73 | )
74 |
75 | if len(gArgs.appropriateInputImageDataList) == 1:
76 | gallery_idx = 0
77 | if gallery_idx < 0:
78 | return original_gallery, generation_info, plaintext_to_html("Image for hires fix is not selected"), ""
79 | if gallery_idx >= len(gArgs.appropriateInputImageDataList):
80 |         return original_gallery, generation_info, plaintext_to_html("Cannot apply hires fix to extra included images"), ""
81 | inputImageIdx = gArgs.appropriateInputImageDataList[gallery_idx].inputImageIdx
82 | image = gArgs.images[inputImageIdx]
83 | gArgs.mask = gArgs.appropriateInputImageDataList[gallery_idx].mask
84 | gArgs.seed = gArgs.appropriateInputImageDataList[gallery_idx].seed
85 | gArgs.hires_fix_args = hires_fix_args
86 | gArgs.pass_into_hires_fix_automatically = False
87 | gArgs.batch_count = 1
88 | gArgs.batch_size = 1
89 |
90 | prepareGenerationArgsBeforeHiresFixPass(gArgs)
91 | hrGArgs = getGenerationArgsForHiresFixPass(gArgs)
92 |
93 | try:
94 | shared.total_tqdm.clear()
95 |
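    |         # Reuse the cached base (pre-hires) inpaint result when it was produced for the same
    |         # upscaler and gallery index; otherwise run the base inpaint pass again first.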
96 | if generate_ui.lastGenerationArgs.hiresFixCacheData is not None and\
97 | generate_ui.lastGenerationArgs.hiresFixCacheData.upscaler == hf_upscaler and\
98 | generate_ui.lastGenerationArgs.hiresFixCacheData.galleryIdx == gallery_idx:
99 | generatedImage = generate_ui.lastGenerationArgs.hiresFixCacheData.generatedImage
100 | print('hiresFixCacheData restored from cache')
101 | shared.state.job_count = 1
102 | shared.total_tqdm.updateTotal(hrGArgs.steps)
103 | else:
104 | shared.state.job_count = 2
105 | shared.total_tqdm.updateTotal(gArgs.steps + hrGArgs.steps)
106 | shared.state.textinfo = "inpainting with upscaler"
107 |
108 | processed, scriptImages = inpaint(image, gArgs)
109 | generatedImage = processed.images[0]
110 | if not interrupted() and not shared.state.skipped:
111 | generate_ui.lastGenerationArgs.hiresFixCacheData = HiresFixCacheData(hf_upscaler, generatedImage, gallery_idx)
112 | print('hiresFixCacheData cached')
113 |
114 |
115 | shared.state.textinfo = "applying hires fix"
116 | processed, scriptImages = inpaint(generatedImage, hrGArgs, getSaveDir(), "-hires-fix")
117 |
118 | shared.state.end()
119 | if interrupted():
120 | raise Exception("Interrupted")
121 |
122 | except Exception as e:
123 | text = f"Error while processing hires fix: {e}"
124 | errors.report(text, exc_info=True)
125 | return original_gallery, generation_info, plaintext_to_html(text), ""
126 |
127 | if not gallery or not generation_info:
128 | return processed.images, processed.js(), plaintext_to_html(processed.info), plaintext_to_html(processed.comments, classname="comments")
129 |
130 | new_gallery = []
131 | geninfo = json.loads(generation_info)
132 | for i, image in enumerate(gallery):
133 | if i == gallery_idx:
134 | geninfo["infotexts"][gallery_idx: gallery_idx+1] = processed.infotexts
135 | new_gallery.append(processed.images[0])
136 | else:
137 | fake_image = Image.new(mode="RGB", size=(1, 1))
138 | fake_image.already_saved_as = image["name"].rsplit('?', 1)[0]
139 | new_gallery.append(fake_image)
140 |
141 | geninfo["infotexts"][gallery_idx] = processed.info
142 |
143 | return new_gallery, json.dumps(geninfo), plaintext_to_html(processed.info), plaintext_to_html(processed.comments, classname="comments")
144 |
--------------------------------------------------------------------------------
/replacer/ui/generate_ui.py:
--------------------------------------------------------------------------------
1 | import os, datetime, copy
2 | from PIL import Image
3 | import modules.shared as shared
4 | from modules.ui import plaintext_to_html
5 | from replacer.generation_args import GenerationArgs, HiresFixArgs, HiresFixCacheData
6 | from replacer.options import getSaveDir
7 | from replacer.extensions import replacer_extensions
8 | from replacer.tools import prepareMask, generateSeed, convertIntoPath
9 | from replacer.ui.tools_ui import prepareExpectedUIBehavior
10 | from replacer.generate import generate
11 |
12 |
13 | lastGenerationArgs: GenerationArgs = None
14 |
15 | def getLastUsedSeed():
16 | if lastGenerationArgs is None:
17 | return -1
18 | else:
19 | return lastGenerationArgs.seed
20 |
21 |
22 | def getLastUsedVariationSeed():
23 | if lastGenerationArgs is None:
24 | return -1
25 | else:
26 | return lastGenerationArgs.variation_seed
27 |
28 |
29 | def getLastUsedMaskNum():
30 | if lastGenerationArgs is None or not lastGenerationArgs.mask_num_for_metadata:
31 | return "Random"
32 | else:
33 | return str(lastGenerationArgs.mask_num_for_metadata)
34 |
35 |
36 | def generate_ui_(
37 | id_task,
38 | selected_input_mode: str,
39 | detectionPrompt: str,
40 | avoidancePrompt: str,
41 | positivePrompt: str,
42 | negativePrompt: str,
43 | image_single,
44 | image_batch,
45 | keep_original_filenames,
46 | input_batch_dir: str,
47 | output_batch_dir: str,
48 | keep_original_filenames_from_dir,
49 | show_batch_dir_results,
50 | upscalerForImg2Img,
51 | seed,
52 | sampler,
53 | scheduler,
54 | steps,
55 | box_threshold,
56 | mask_expand,
57 | mask_blur,
58 | max_resolution_on_detection,
59 | sam_model_name,
60 | dino_model_name,
61 | cfg_scale,
62 | denoise,
63 | inpaint_padding,
64 | inpainting_fill,
65 | width,
66 | height,
67 | batch_count,
68 | batch_size,
69 | inpainting_mask_invert,
70 | extra_includes,
71 | fix_steps,
72 | override_sd_model,
73 | sd_model_checkpoint,
74 | mask_num,
75 | avoidance_mask_mode,
76 | avoidance_mask,
77 | only_custom_mask,
78 | custom_mask_mode,
79 | custom_mask,
80 | use_inpaint_diff,
81 | inpaint_diff_mask_view,
82 | clip_skip,
83 | pass_into_hires_fix_automatically,
84 | save_before_hires_fix,
85 | do_not_use_mask,
86 | rotation_fix: str,
87 | variation_seed: int,
88 | variation_strength: float,
89 | integer_only_masked: bool,
90 | forbid_too_small_crop_region: bool,
91 | correct_aspect_ratio: bool,
92 |
93 | hf_upscaler,
94 | hf_steps,
95 | hf_sampler,
96 | hf_scheduler,
97 | hf_denoise,
98 | hf_cfg_scale,
99 | hf_positive_prompt_suffix,
100 | hf_size_limit,
101 | hf_above_limit_upscaler,
102 | hf_unload_detection_models,
103 | hf_disable_cn,
104 | hf_extra_mask_expand,
105 | hf_positive_prompt,
106 | hf_negative_prompt,
107 | hf_sd_model_checkpoint,
108 | hf_extra_inpaint_padding,
109 | hf_extra_mask_blur,
110 | hf_randomize_seed,
111 | hf_soft_inpaint,
112 | hf_supersampling,
113 |
114 | *scripts_args,
115 | ):
116 | if (seed == -1):
117 | seed = generateSeed()
118 |
119 | input_batch_dir = convertIntoPath(input_batch_dir)
120 | output_batch_dir = convertIntoPath(output_batch_dir)
121 |
122 | images = []
123 |
124 | if selected_input_mode == "tab_single":
125 | if image_single is not None:
126 | images = [image_single]
127 |
128 | if selected_input_mode == "tab_batch":
129 | def getImages(image_folder):
130 | for img in image_folder:
131 | if isinstance(img, Image.Image):
132 | image = img
133 | else:
134 | filename = os.path.abspath(img.name)
135 | image = Image.open(filename).convert('RGBA')
136 | if keep_original_filenames:
137 | image.additional_save_suffix = '-' + os.path.basename(filename)
138 | yield image
139 | if image_batch is not None:
140 | images = getImages(image_batch)
141 |
142 | if selected_input_mode == "tab_batch_dir":
143 | assert not shared.cmd_opts.hide_ui_dir_config, '--hide-ui-dir-config option must be disabled'
144 | def readImages(input_dir):
145 | if hasattr(shared, 'walk_image_files'): # webui 1.10
146 | image_list = shared.walk_image_files(input_dir)
147 | else:
148 | image_list = shared.listfiles(input_dir)
149 |
150 | for filename in image_list:
151 | try:
152 | image = Image.open(filename).convert('RGBA')
153 | if keep_original_filenames_from_dir:
154 | image.additional_save_suffix = '-' + os.path.basename(filename)
155 | except Exception:
156 | continue
157 | yield image
158 | images = readImages(input_batch_dir)
159 |
160 |
161 | images = list(images)
162 |
163 | if len(images) == 0:
164 | return [], "", plaintext_to_html("no input images"), ""
165 |
166 | cn_args, soft_inpaint_args = replacer_extensions.prepareScriptsArgs(scripts_args)
167 |
168 | hires_fix_args = HiresFixArgs(
169 | upscaler = hf_upscaler,
170 | steps = hf_steps,
171 | sampler = hf_sampler,
172 | scheduler = hf_scheduler,
173 | denoise = hf_denoise,
174 | cfg_scale = hf_cfg_scale,
175 | positive_prompt_suffix = hf_positive_prompt_suffix,
176 | size_limit = hf_size_limit,
177 | above_limit_upscaler = hf_above_limit_upscaler,
178 | unload_detection_models = hf_unload_detection_models,
179 | disable_cn = hf_disable_cn,
180 | extra_mask_expand = hf_extra_mask_expand,
181 | positive_prompt = hf_positive_prompt,
182 | negative_prompt = hf_negative_prompt,
183 | sd_model_checkpoint = hf_sd_model_checkpoint,
184 | extra_inpaint_padding = hf_extra_inpaint_padding,
185 | extra_mask_blur = hf_extra_mask_blur,
186 | randomize_seed = hf_randomize_seed,
187 | soft_inpaint = hf_soft_inpaint,
188 | supersampling = hf_supersampling,
189 | )
190 |
191 | gArgs = GenerationArgs(
192 | positivePrompt=positivePrompt,
193 | negativePrompt=negativePrompt,
194 | detectionPrompt=detectionPrompt,
195 | avoidancePrompt=avoidancePrompt,
196 | upscalerForImg2Img=upscalerForImg2Img,
197 | seed=seed,
198 | samModel=sam_model_name,
199 | grdinoModel=dino_model_name,
200 | boxThreshold=box_threshold,
201 | maskExpand=mask_expand,
202 | maxResolutionOnDetection=max_resolution_on_detection,
203 |
204 | steps=steps,
205 | sampler_name=sampler,
206 | scheduler=scheduler,
207 | mask_blur=mask_blur,
208 | inpainting_fill=inpainting_fill,
209 | batch_count=batch_count,
210 | batch_size=batch_size,
211 | cfg_scale=cfg_scale,
212 | denoising_strength=denoise,
213 | height=height,
214 | width=width,
215 | inpaint_full_res_padding=inpaint_padding,
216 | img2img_fix_steps=fix_steps,
217 | inpainting_mask_invert=inpainting_mask_invert,
218 |
219 | images=images,
220 | override_sd_model=override_sd_model,
221 | sd_model_checkpoint=sd_model_checkpoint,
222 | mask_num=mask_num,
223 | avoidance_mask=prepareMask(avoidance_mask_mode, avoidance_mask),
224 | only_custom_mask=only_custom_mask,
225 | custom_mask=prepareMask(custom_mask_mode, custom_mask),
226 | use_inpaint_diff=use_inpaint_diff and inpaint_diff_mask_view is not None and \
227 | replacer_extensions.inpaint_difference.Globals is not None and \
228 | replacer_extensions.inpaint_difference.Globals.generated_mask is not None,
229 | clip_skip=clip_skip,
230 | pass_into_hires_fix_automatically=pass_into_hires_fix_automatically,
231 | save_before_hires_fix=save_before_hires_fix,
232 | do_not_use_mask=do_not_use_mask,
233 | rotation_fix=rotation_fix,
234 | variation_seed=variation_seed,
235 | variation_strength=variation_strength,
236 | integer_only_masked=integer_only_masked,
237 | forbid_too_small_crop_region=forbid_too_small_crop_region,
238 | correct_aspect_ratio=correct_aspect_ratio,
239 |
240 | hires_fix_args=hires_fix_args,
241 | cn_args=cn_args,
242 | soft_inpaint_args=soft_inpaint_args,
243 | )
244 | prepareExpectedUIBehavior(gArgs)
245 |
246 |
247 |
248 | saveDir = getSaveDir()
249 | saveToSubdirs = shared.opts.save_to_dirs
250 | if selected_input_mode == "tab_batch_dir":
251 | if output_batch_dir != "":
252 | saveDir = output_batch_dir
253 | else:
254 | saveDir = os.path.join(input_batch_dir, 'out')
255 | saveToSubdirs = False
256 |
257 |
258 | processed, allExtraImages = generate(gArgs, saveDir, saveToSubdirs, extra_includes)
259 | if processed is None or not getattr(processed, 'images', None):
260 | return [], "", plaintext_to_html(f"No one image was processed. See console logs for exceptions"), ""
261 |
262 | global lastGenerationArgs
263 | gArgs.appropriateInputImageDataList = [x.appropriateInputImageData for x in processed.images]
264 | lastGenerationArgs = gArgs
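    |     # Pre-seed the hires-fix cache with the first result so the "Apply HiresFix" button
    |     # can reuse it instead of re-running the base pass (assuming its settings still match).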
265 | lastGenerationArgs.hiresFixCacheData = HiresFixCacheData(gArgs.upscalerForImg2Img, processed.images[0], 0)
266 |
267 |
268 | if selected_input_mode == "tab_batch_dir" and not show_batch_dir_results:
269 | return [], "", plaintext_to_html(f"Saved into {saveDir}"), ""
270 |
271 | processed.images += allExtraImages
272 | processed.infotexts += [processed.info] * len(allExtraImages)
273 |
274 | return processed.images, processed.js(), plaintext_to_html(processed.info), plaintext_to_html(processed.comments, classname="comments")
275 |
276 |
277 | def generate_ui(*args, **kwargs):
278 | return generate_ui_(*args, **kwargs)
279 |
280 |
281 |
282 |
--------------------------------------------------------------------------------
/replacer/ui/make_hiresfix_options.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | import modules
3 | from modules import shared, sd_samplers
4 | from modules.ui_common import create_refresh_button
5 | from replacer.options import getHiresFixPositivePromptSuffixExamples, doNotShowUnloadButton
6 | from replacer.extensions import replacer_extensions
7 | from replacer.ui.tools_ui import IS_WEBUI_1_5, IS_WEBUI_1_9, AttrDict
8 |
9 |
10 |
11 | def getHiresFixCheckpoints():
12 | if IS_WEBUI_1_5:
13 | return ["Use same checkpoint"] + modules.sd_models.checkpoint_tiles()
14 | else:
15 | return ["Use same checkpoint"] + modules.sd_models.checkpoint_tiles(use_short=False)
16 |
17 |
18 |
19 | def makeHiresFixOptions(comp: AttrDict):
20 | with gr.Accordion("HiresFix options", open=False):
21 | with gr.Tabs():
22 | with gr.Tab('General'):
23 | with gr.Row():
24 | comp.hf_upscaler = gr.Dropdown(
25 | value="ESRGAN_4x",
26 | choices=[x.name for x in shared.sd_upscalers],
27 | label="Upscaler",
28 | )
29 |
30 | comp.hf_steps = gr.Slider(
31 | label='Hires steps',
32 | value=4,
33 | step=1,
34 | minimum=0,
35 | maximum=150,
36 | elem_id="replacer_hf_steps"
37 | )
38 |
39 | with gr.Row():
40 | comp.hf_denoise = gr.Slider(
41 | label='Hires Denoising',
42 | value=0.35,
43 | step=0.01,
44 | minimum=0.0,
45 | maximum=1.0,
46 | elem_id="replacer_hf_denoise",
47 | )
48 |
49 | with gr.Row():
50 | comp.hf_size_limit = gr.Slider(
51 | label='Limit render size',
52 | value=1800,
53 | step=1,
54 | minimum=700,
55 | maximum=10000,
56 | elem_id="replacer_hf_size_limit",
57 | )
58 |
59 | comp.hf_above_limit_upscaler = gr.Dropdown(
60 | value="Lanczos",
61 | choices=[x.name for x in shared.sd_upscalers],
62 | label="Above limit upscaler",
63 | )
64 |
65 | with gr.Row():
66 | comp.hf_supersampling = gr.Slider(
67 | label='Hires supersampling',
68 | value=1.6,
69 | step=0.1,
70 | minimum=1.0,
71 | maximum=5.0,
72 | elem_id="replacer_hf_supersampling"
73 | )
74 |
75 | with gr.Row():
76 | comp.hf_extra_mask_expand = gr.Slider(
77 | label='Extra mask expand',
78 | value=5,
79 | step=1,
80 | minimum=0,
81 | maximum=200,
82 | elem_id="replacer_hf_extra_mask_expand",
83 | )
84 |
85 | comp.hf_extra_inpaint_padding = gr.Slider(label='Extra inpaint padding',
86 | value=90, elem_id="replacer_hf_extra_inpaint_padding",
87 | minimum=0, maximum=3000, step=1)
88 |
89 | comp.hf_extra_mask_blur = gr.Slider(label='Extra mask blur',
90 | value=2, elem_id="replacer_hf_extra_mask_blur",
91 | minimum=0, maximum=150, step=1)
92 |
93 | with gr.Row():
94 | comp.hf_randomize_seed = gr.Checkbox(
95 | label='Randomize seed for hires fix',
96 | value=True,
97 | elem_id="replacer_hf_randomize_seed",
98 | )
99 |
100 | with gr.Tab('Advanced'):
101 | with gr.Row():
102 | comp.hf_sampler = gr.Dropdown(
103 | label='Hires sampling method',
104 | elem_id="replacer_hf_sampler",
105 | choices=["Use same sampler"] + sd_samplers.visible_sampler_names(),
106 | value="Use same sampler"
107 | )
108 | if IS_WEBUI_1_9:
109 | from modules import sd_schedulers
110 | comp.hf_scheduler = gr.Dropdown(
111 | label='Hires schedule type',
112 | elem_id="replacer_hf_scheduler",
113 | choices=["Use same scheduler"] + [x.label for x in sd_schedulers.schedulers],
114 | value="Use same scheduler"
115 | )
116 | else:
117 | comp.hf_scheduler = gr.Textbox("", visible=False)
118 |
119 | comp.hf_cfg_scale = gr.Slider(
120 | label='Hires CFG Scale',
121 | value=1.0,
122 | step=0.5,
123 | minimum=1.0,
124 | maximum=30.0,
125 | elem_id="replacer_hf_cfg_scale"
126 | )
127 |
128 | with gr.Row():
129 | comp.hf_unload_detection_models = gr.Checkbox(
130 | label='Unload detection models before hires fix',
131 | value=True,
132 | elem_id="replacer_hf_unload_detection_models",
133 | )
134 | if doNotShowUnloadButton():
135 | comp.hf_unload_detection_models.visible = False
136 |
137 | with gr.Row():
138 | placeholder = None
139 | placeholder = getHiresFixPositivePromptSuffixExamples()[0]
140 |
141 | comp.hfPositivePromptSuffix = gr.Textbox(
142 | label="Suffix for positive prompt",
143 | show_label=True,
144 | lines=1,
145 | elem_classes=["hfPositivePromptSuffix"],
146 | placeholder=placeholder,
147 | elem_id="replacer_hfPositivePromptSuffix",
148 | )
149 |
150 | gr.Examples(
151 | examples=getHiresFixPositivePromptSuffixExamples(),
152 | inputs=comp.hfPositivePromptSuffix,
153 | label="",
154 | elem_id="replacer_hfPositivePromptSuffix_examples",
155 | )
156 |
157 | with gr.Row():
158 | comp.hf_positivePrompt = gr.Textbox(label="Override positive prompt",
159 | show_label=True,
160 | lines=1,
161 | elem_classes=["positivePrompt"],
162 | placeholder='leave empty to use the same prompt',
163 | elem_id="replacer_hf_positivePrompt")
164 |
165 | comp.hf_negativePrompt = gr.Textbox(label="Override negative prompt",
166 | show_label=True,
167 | lines=1,
168 | elem_classes=["negativePrompt"],
169 | placeholder='leave empty to use the same prompt',
170 | elem_id="replacer_hf_negativePrompt")
171 |
172 | with gr.Row():
173 | comp.hf_sd_model_checkpoint = gr.Dropdown(label='Hires checkpoint',
174 | elem_id="replacer_hf_sd_model_checkpoint",
175 | choices=getHiresFixCheckpoints(), value="Use same checkpoint")
176 | create_refresh_button(comp.hf_sd_model_checkpoint, modules.sd_models.list_models,
177 | lambda: {"choices": getHiresFixCheckpoints()}, "replacer_hf_sd_model_checkpoint")
178 |
179 | comp.hf_disable_cn = gr.Checkbox(
180 | label='Disable ControlNet while hires fix',
181 | value=True,
182 | elem_id="replacer_hf_disable_cn",
183 | )
184 | if not replacer_extensions.controlnet.SCRIPT:
185 | comp.hf_disable_cn.visible = False
186 |
187 | with gr.Row():
188 | comp.hf_soft_inpaint = gr.Radio(label='Soft inpainting for hires fix',
189 | choices=['Same', 'Enable', 'Disable'],
190 | value='Same', type="value", elem_id="replacer_hf_soft_inpaint")
191 | if not replacer_extensions.soft_inpainting.SCRIPT:
192 | comp.hf_soft_inpaint.visible = False
193 |
194 |
--------------------------------------------------------------------------------
/replacer/ui/replacer_main_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from modules import infotext_utils
3 | from replacer.extensions import replacer_extensions
4 | from replacer.ui.tools_ui import AttrDict
5 | from replacer.ui.replacer_tab_ui import getTabUI
6 | from replacer.ui.video.replacer_video_tab_ui import getVideoTabUI
7 |
8 |
9 | try:
10 | from modules.ui_components import ResizeHandleRow
11 | except Exception:
12 | ResizeHandleRow = gr.Row
13 |
14 |
15 |
16 | class ReplacerMainUI:
17 | def __init__(self, isDedicatedPage: bool):
18 | self.replacerTabUI = None
19 | self.replacerVideoTabUI = None
20 | self.components = AttrDict()
21 | self.init_tab(isDedicatedPage)
22 |
23 | def init_tab(self, isDedicatedPage: bool):
24 | comp = AttrDict()
25 | self.replacerTabUI = getTabUI(comp, isDedicatedPage)
26 | self.replacerVideoTabUI = getVideoTabUI(comp, isDedicatedPage)
27 |
28 | self.components = comp
29 |
30 |
31 | def getReplacerTabUI(self):
32 | return self.replacerTabUI
33 |
34 | def getReplacerVideoTabUI(self):
35 | return self.replacerVideoTabUI
36 |
37 |
38 | replacerMainUI: ReplacerMainUI = None
39 | replacerMainUI_dedicated: ReplacerMainUI = None
40 |
41 | registered_param_bindings_main_ui = []
42 |
43 | def initMainUI(*args):
44 | global replacerMainUI, replacerMainUI_dedicated, registered_param_bindings_main_ui
45 | lenBefore = len(infotext_utils.registered_param_bindings)
46 | try:
47 | replacer_extensions.initAllScripts()
48 | replacerMainUI = ReplacerMainUI(isDedicatedPage=False)
49 | replacerMainUI_dedicated = ReplacerMainUI(isDedicatedPage=True)
50 | finally:
51 | replacer_extensions.restoreTemporaryChangedThings()
52 |
53 | registered_param_bindings_main_ui = infotext_utils.registered_param_bindings[lenBefore:]
54 |
55 |
56 | def reinitMainUIAfterUICreated():
57 | replacer_extensions.reinitAllScriptsAfterUICreated()
58 | infotext_utils.registered_param_bindings += registered_param_bindings_main_ui
59 |
60 |
--------------------------------------------------------------------------------
/replacer/ui/tools_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from modules import shared, sd_samplers
3 | from modules.ui_components import ToolButton
4 | from modules.api.api import decode_base64_to_image
5 | from replacer.options import ( EXT_NAME, getDetectionPromptExamples, getNegativePromptExamples,
6 | getPositivePromptExamples, useFirstPositivePromptFromExamples, useFirstNegativePromptFromExamples,
7 | getLimitMaskEditingResolution,
8 | )
9 | from replacer.tools import limitImageByOneDimension, generateSeed, pil_to_base64_jpeg
10 | from replacer.generation_args import GenerationArgs
11 |
12 |
13 | try:
14 | from modules import ui_toprow
15 | except Exception:
16 | ui_toprow = None
17 |
18 |
19 |
20 | IS_WEBUI_1_5 = False
21 | if not hasattr(sd_samplers, 'visible_sampler_names'): # webui 1.5
22 | sd_samplers.visible_sampler_names = lambda: [x.name for x in sd_samplers.samplers_for_img2img if x.name not in shared.opts.hide_samplers]
23 | IS_WEBUI_1_5 = True
24 |
25 | try:
26 | from modules.ui_common import OutputPanel # webui 1.8+
27 | IS_WEBUI_1_8 = True
28 | except Exception as e:
29 | IS_WEBUI_1_8 = False
30 |
31 | if IS_WEBUI_1_8:
32 | from modules import infotext_utils
33 |
34 |
35 |
36 | def update_mask_brush_color(color):
37 | return gr.Image.update(brush_color=color)
38 |
39 | def get_current_image(image, isAvoid, needLimit):
40 | if image is None:
41 | return
42 | if needLimit:
43 | image = decode_base64_to_image(image)
44 | image = limitImageByOneDimension(image, getLimitMaskEditingResolution())
45 | image = pil_to_base64_jpeg(image)
46 | return gr.Image.update(image)
47 |
48 |
49 | def unloadModels():
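    |     # Measures reserved VRAM (in MiB, via ceiling division) before and after clearing the
    |     # Segment Anything cache, then reports how much was freed.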
50 | mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}
51 | memBefore = mem_stats['reserved']
52 | from scripts.sam import clear_cache
53 | clear_cache()
54 | mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}
55 | memAfter = mem_stats['reserved']
56 |
57 | text = f'[{EXT_NAME}] {(memBefore - memAfter) / 1024 :.2f} GB of VRAM were freed'
58 | print(text, flush=True)
59 | if not IS_WEBUI_1_5:
60 | gr.Info(text)
61 |
62 |
63 |
64 | def getSubmitJsFunction(galleryId, buttonsId, extraShowButtonsId, fillGalleryIdx):
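    |     # Returns the JS snippet used as Gradio's `_js`: it appends the gallery/buttons element ids
    |     # (and optionally the selected gallery index) to the arguments before delegating to
    |     # submit_replacer(), presumably defined in javascript/replacer.js.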
65 | if not ui_toprow:
66 | return ''
67 | fillGalleryIdxCode = ''
68 | if fillGalleryIdx:
69 | fillGalleryIdxCode = 'arguments_[1] = selected_gallery_index();'
70 | return 'function(){'\
71 | 'var arguments_ = Array.from(arguments);'\
72 | f'{fillGalleryIdxCode}'\
73 | f'arguments_.push("{extraShowButtonsId}", "{buttonsId}", "{galleryId}");'\
74 | 'return submit_replacer.apply(null, arguments_);'\
75 | '}'
76 |
77 |
78 | def sendBackToReplacer(gallery, gallery_index):
79 | assert len(gallery) > 0, 'No image'
80 | if len(gallery) == 1:
81 | gallery_index = 0
82 | assert 0 <= gallery_index < len(gallery), f'Bad image index: {gallery_index}'
83 | assert not IS_WEBUI_1_5, 'sendBackToReplacer is not supported for webui < 1.8'
84 | image_info = gallery[gallery_index] if 0 <= gallery_index < len(gallery) else gallery[0]
85 | image = infotext_utils.image_from_url_text(image_info)
86 | return image
87 |
88 |
89 |
90 |
91 | class OutputPanelWatcher():
92 | send_to_img2img = None
93 | send_to_inpaint = None
94 | send_to_extras = None
95 | send_back_to_replacer = None
96 |
97 |
98 | def watchOutputPanel(component, **kwargs):
99 | elem_id = kwargs.get('elem_id', None)
100 | if elem_id is None:
101 | return
102 |
103 | if elem_id == 'replacer_send_to_img2img' or elem_id == 'img2img_tab':
104 | OutputPanelWatcher.send_to_img2img = component
105 |
106 | if elem_id == 'replacer_send_to_inpaint' or elem_id == 'inpaint_tab':
107 | OutputPanelWatcher.send_to_inpaint = component
108 |
109 | if elem_id == 'replacer_send_to_extras' or elem_id == 'extras_tab':
110 | OutputPanelWatcher.send_to_extras = component
111 | OutputPanelWatcher.send_back_to_replacer = ToolButton('↙️',
112 |             elem_id='replacer_send_back_to_replacer',
113 | tooltip="Send image back to Replacer's input",
114 | visible=elem_id=='replacer_send_to_extras')
115 |
116 |
117 | class AttrDict(dict):
118 | def __getattr__(self, key):
119 | return self[key]
120 |
121 | def __setattr__(self, key, value):
122 | self[key] = value
123 |
124 | IS_WEBUI_1_9 = hasattr(shared.cmd_opts, 'unix_filenames_sanitization')
125 |
126 |
127 |
128 | def prepareExpectedUIBehavior(gArgs: GenerationArgs):
129 | if gArgs.detectionPrompt == '':
130 | gArgs.detectionPrompt = getDetectionPromptExamples()[0]
131 |
132 | if gArgs.positivePrompt == '' and useFirstPositivePromptFromExamples():
133 | gArgs.positivePrompt = getPositivePromptExamples()[0]
134 |
135 | if gArgs.negativePrompt == '' and useFirstNegativePromptFromExamples():
136 | gArgs.negativePrompt = getNegativePromptExamples()[0]
137 |
138 | if (gArgs.seed == -1):
139 | gArgs.seed = generateSeed()
140 |
141 | if (gArgs.variation_seed == -1):
142 | gArgs.variation_seed = generateSeed()
143 |
144 | gArgs.detectionPrompt = gArgs.detectionPrompt.strip()
145 | gArgs.avoidancePrompt = gArgs.avoidancePrompt.strip()
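    |     # With an inverted mask the expand values are negated: shrinking the mask itself is what
    |     # enlarges the region that actually gets inpainted.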
146 | if gArgs.inpainting_mask_invert:
147 | gArgs.maskExpand = -gArgs.maskExpand
148 | gArgs.hires_fix_args.extra_mask_expand = -gArgs.hires_fix_args.extra_mask_expand
149 |
150 |
151 | custom_script_source = None
152 |
153 | def _setCustomScriptSourceForComponents(text: str):
154 | global custom_script_source
155 | custom_script_source = text
156 |
157 |
158 | class OverrideCustomScriptSource:
159 | def __init__(self, text):
160 | self.text = text
161 |
162 | def __enter__(self):
163 | _setCustomScriptSourceForComponents(self.text)
164 |
165 | def __exit__(self, *args):
166 | _setCustomScriptSourceForComponents(None)
167 |
168 |
169 | def watchSetCustomScriptSourceForComponents(component, **kwargs):
170 | global custom_script_source
171 | if custom_script_source is not None:
172 | component.custom_script_source = custom_script_source
173 |
174 |
175 | try:
176 | from modules.ui_components import ResizeHandleRow
177 | except Exception:
178 | ResizeHandleRow = gr.Row
179 |
--------------------------------------------------------------------------------
/replacer/ui/video/generation.py:
--------------------------------------------------------------------------------
1 | import datetime, os
2 |
3 | from modules import shared
4 | from modules.ui import plaintext_to_html
5 | import gradio as gr
6 | from replacer.generation_args import GenerationArgs, DUMMY_HIRESFIX_ARGS, AnimateDiffArgs
7 | from replacer.ui.tools_ui import prepareExpectedUIBehavior
8 | from replacer.extensions import replacer_extensions
9 | from replacer.video_tools import overrideSettingsForVideo, save_video
10 | from replacer.options import EXT_NAME_LOWER
11 |
12 | from .project import getFrames, getMasks, getOriginalVideoPath
13 | from replacer.video_animatediff import animatediffGenerate
14 |
15 |
16 | def videoGenerateUI(
17 | task_id: str,
18 | project_path: str,
19 | target_video_fps: int,
20 |
21 | ad_fragment_length,
22 | ad_internal_fps,
23 | ad_batch_size,
24 | ad_stride,
25 | ad_overlap,
26 | ad_latent_power,
27 | ad_latent_scale,
28 | ad_freeinit_enable,
29 | ad_freeinit_filter,
30 | ad_freeinit_ds,
31 | ad_freeinit_dt,
32 | ad_freeinit_iters,
33 | ad_generate_only_first_fragment,
34 | ad_cn_inpainting_model,
35 | ad_control_weight,
36 | ad_force_override_sd_model,
37 | ad_force_sd_model_checkpoint,
38 | ad_motion_model,
39 |
40 | detectionPrompt: str,
41 | avoidancePrompt: str,
42 | positivePrompt: str,
43 | negativePrompt: str,
44 | upscalerForImg2Img,
45 | seed,
46 | sampler,
47 | scheduler,
48 | steps,
49 | box_threshold,
50 | mask_expand,
51 | mask_blur,
52 | max_resolution_on_detection,
53 | sam_model_name,
54 | dino_model_name,
55 | cfg_scale,
56 | denoise,
57 | inpaint_padding,
58 | inpainting_fill,
59 | width,
60 | height,
61 | inpainting_mask_invert,
62 | fix_steps,
63 | override_sd_model,
64 | sd_model_checkpoint,
65 | mask_num,
66 | only_custom_mask,
67 | clip_skip,
68 | pass_into_hires_fix_automatically,
69 | save_before_hires_fix,
70 | do_not_use_mask,
71 | rotation_fix: str,
72 | variation_seed: int,
73 | variation_strength: float,
74 | integer_only_masked: bool,
75 | forbid_too_small_crop_region: bool,
76 | correct_aspect_ratio: bool,
77 |
78 | *scripts_args,
79 | ):
80 | cn_args, soft_inpaint_args = replacer_extensions.prepareScriptsArgs(scripts_args)
81 |
82 | animatediff_args = AnimateDiffArgs(
83 | fragment_length=ad_fragment_length,
84 | internal_fps=ad_internal_fps,
85 | batch_size=ad_batch_size,
86 | stride=ad_stride,
87 | overlap=ad_overlap,
88 | latent_power=ad_latent_power,
89 | latent_scale=ad_latent_scale,
90 | freeinit_enable=ad_freeinit_enable,
91 | freeinit_filter=ad_freeinit_filter,
92 | freeinit_ds=ad_freeinit_ds,
93 | freeinit_dt=ad_freeinit_dt,
94 | freeinit_iters=ad_freeinit_iters,
95 | generate_only_first_fragment=ad_generate_only_first_fragment,
96 | cn_inpainting_model=ad_cn_inpainting_model,
97 | control_weight=ad_control_weight,
98 | force_override_sd_model=ad_force_override_sd_model,
99 | force_sd_model_checkpoint=ad_force_sd_model_checkpoint,
100 | motion_model=ad_motion_model,
101 | )
102 |
103 | gArgs = GenerationArgs(
104 | positivePrompt=positivePrompt,
105 | negativePrompt=negativePrompt,
106 | detectionPrompt=detectionPrompt,
107 | avoidancePrompt=avoidancePrompt,
108 | upscalerForImg2Img=upscalerForImg2Img,
109 | seed=seed,
110 | samModel=sam_model_name,
111 | grdinoModel=dino_model_name,
112 | boxThreshold=box_threshold,
113 | maskExpand=mask_expand,
114 | maxResolutionOnDetection=max_resolution_on_detection,
115 |
116 | steps=steps,
117 | sampler_name=sampler,
118 | scheduler=scheduler,
119 | mask_blur=mask_blur,
120 | inpainting_fill=inpainting_fill,
121 | batch_count=1,
122 | batch_size=1,
123 | cfg_scale=cfg_scale,
124 | denoising_strength=denoise,
125 | height=height,
126 | width=width,
127 | inpaint_full_res_padding=inpaint_padding,
128 | img2img_fix_steps=fix_steps,
129 | inpainting_mask_invert=inpainting_mask_invert,
130 |
131 | images=[],
132 | override_sd_model=override_sd_model,
133 | sd_model_checkpoint=sd_model_checkpoint,
134 | mask_num=mask_num,
135 | avoidance_mask=None,
136 | only_custom_mask=only_custom_mask,
137 | custom_mask=None,
138 | use_inpaint_diff=False,
139 | clip_skip=clip_skip,
140 | pass_into_hires_fix_automatically=pass_into_hires_fix_automatically,
141 | save_before_hires_fix=save_before_hires_fix,
142 | do_not_use_mask=do_not_use_mask,
143 | rotation_fix=rotation_fix,
144 | variation_seed=variation_seed,
145 | variation_strength=variation_strength,
146 | integer_only_masked=integer_only_masked,
147 | forbid_too_small_crop_region=forbid_too_small_crop_region,
148 | correct_aspect_ratio=correct_aspect_ratio,
149 |
150 | hires_fix_args=DUMMY_HIRESFIX_ARGS,
151 | cn_args=cn_args,
152 | soft_inpaint_args=soft_inpaint_args,
153 |
154 | animatediff_args=animatediff_args,
155 | )
156 | prepareExpectedUIBehavior(gArgs)
157 |
158 | originalVideo = getOriginalVideoPath(project_path)
159 | if not originalVideo:
160 |         raise gr.Error("This project doesn't have an original video")
161 | timestamp = int(datetime.datetime.now().timestamp())
162 | fragmentsPath = os.path.join(project_path, 'outputs', str(timestamp))
163 | resultPath = os.path.join(fragmentsPath, "result")
164 | frames = getFrames(project_path)
165 | masks = getMasks(project_path)
166 | if not frames or not masks:
167 | raise gr.Error("This project doesn't have frames or masks")
168 | frames = list(frames)
169 | masks = list(masks)
170 | saveVideoPath = os.path.join(fragmentsPath, f'{EXT_NAME_LOWER}_{os.path.basename(originalVideo)}_{gArgs.seed}.mp4')
171 |     if len(saveVideoPath) > 260:  # likely to hit Windows' MAX_PATH limit; use a shorter name instead
172 | saveVideoPath = os.path.join(fragmentsPath, f'{EXT_NAME_LOWER}_{timestamp}.mp4')
173 |
174 | restore = overrideSettingsForVideo()
175 | try:
176 | animatediffGenerate(gArgs, fragmentsPath, resultPath, frames, masks, target_video_fps)
177 |
178 |         shared.state.textinfo = 'saving video'
179 |         print("saving video")
180 | save_video(resultPath, target_video_fps, originalVideo, saveVideoPath, gArgs.seed)
181 | finally:
182 | restore()
183 |
184 | return [], "", plaintext_to_html(f"Saved as {saveVideoPath}"), ""
185 |
186 |
--------------------------------------------------------------------------------
/replacer/ui/video/masking.py:
--------------------------------------------------------------------------------
1 | import os, shutil, math, datetime
2 | import gradio as gr
3 | from PIL import Image, ImageOps
4 | from modules import shared
5 |
6 | from replacer.video_tools import separate_video_into_frames
7 | from replacer.tools import limitImageByOneDimension, makePreview, pil_to_base64_jpeg, prepareMask, applyMaskBlur
8 | from replacer.ui.tools_ui import prepareExpectedUIBehavior
9 | from replacer.options import getLimitMaskEditingResolution
10 | from replacer.generation_args import GenerationArgs, DUMMY_HIRESFIX_ARGS
11 | from .project import getOriginalVideoPath, getFrames, getMasks
12 |
13 | from replacer.video_animatediff import detectVideoMasks
14 |
15 |
16 | def prepareMasksDir(project_path: str, fps_out: int):
17 | if not project_path:
18 | raise gr.Error("No project selected")
19 | shared.state.textinfo = "preparing mask dir"
20 | framesDir = os.path.join(project_path, 'frames')
21 | if os.path.exists(framesDir):
22 |         assert framesDir.endswith('frames')  # safety: never rmtree anything but the frames dir
23 | shutil.rmtree(framesDir)
24 | os.makedirs(framesDir, exist_ok=True)
25 |
26 | originalVideo = getOriginalVideoPath(project_path)
27 | separate_video_into_frames(originalVideo, fps_out, framesDir, 'png')
28 |
29 | masksDir = os.path.join(project_path, 'masks')
30 | if os.path.exists(masksDir):
31 | timestamp = int(datetime.datetime.now().timestamp())
32 | oldMasksDir = os.path.join(project_path, "old masks")
33 | rmDir = os.path.join(oldMasksDir, str(timestamp))
34 | os.makedirs(oldMasksDir, exist_ok=True)
35 | shutil.move(masksDir, rmDir)
36 | os.makedirs(masksDir, exist_ok=True)
37 |
38 |
39 | def saveMask(project_path: str, mask: Image.Image, number: int):
40 | masksDir = os.path.join(project_path, 'masks')
41 | savePath = os.path.join(masksDir, f'frame_{number}.{shared.opts.samples_format}')
42 | mask.convert('RGB').save(savePath, subsampling=0, quality=93)
43 |
44 |
45 |
46 |
47 | def getMasksPreview(project_path: str, page: int):
48 | if not project_path:
49 | raise gr.Error("No project selected")
50 | frames = list(getFrames(project_path))
51 | masks = list(getMasks(project_path))
52 | totalFrames = len(masks)
53 |
54 | start = page*10
55 | end = min(page*10+10, totalFrames)
56 | frames = frames[start: end]
57 | masks = masks[start: end]
58 |
59 | for i in range(len(frames)):
60 | frames[i] = limitImageByOneDimension(frames[i], getLimitMaskEditingResolution())
61 | masks[i] = masks[i].resize(frames[i].size)
62 |
63 | composited: list[Image.Image] = []
64 | for frame, mask in zip(frames, masks):
65 | composited.append(makePreview(frame, mask))
66 |
67 | for i in range(len(composited), 10):
68 | composited.append(None)
69 |
70 | composited = [pil_to_base64_jpeg(x) for x in composited]
71 |
72 | return page, f"**Page {page+1}/{math.ceil(totalFrames/10)}**", *composited
73 |
74 |
75 |
76 |
77 |
78 | def generateEmptyMasks(task_id, project_path: str, fps_out: int, only_the_first_fragment: bool, fragment_length):
79 | prepareMasksDir(project_path, fps_out)
80 | frames = list(getFrames(project_path))
81 | maxNum = len(frames)
82 | if only_the_first_fragment and fragment_length != 0:
83 | maxNum = fragment_length
84 | for i in range(maxNum):
85 | blackFilling = Image.new('L', frames[0].size, 0)
86 | saveMask(project_path, blackFilling, i)
87 |
88 | return getMasksPreview(project_path, page=0)
89 |
90 |
91 |
92 |
93 | def generateDetectedMasks(task_id, project_path: str, fps_out: int, only_the_first_fragment: bool, fragment_length,
94 | detectionPrompt,
95 | avoidancePrompt,
96 | seed,
97 | sam_model_name,
98 | dino_model_name,
99 | box_threshold,
100 | mask_expand,
101 | mask_blur,
102 | max_resolution_on_detection,
103 | inpainting_mask_invert,
104 | mask_num,
105 | avoidance_mask_mode,
106 | avoidance_mask,
107 | only_custom_mask,
108 | custom_mask_mode,
109 | custom_mask,
110 | do_not_use_mask,
111 | ):
112 |
113 | gArgs = GenerationArgs(
114 | positivePrompt="",
115 | negativePrompt="",
116 | detectionPrompt=detectionPrompt,
117 | avoidancePrompt=avoidancePrompt,
118 | upscalerForImg2Img="",
119 | seed=seed,
120 | samModel=sam_model_name,
121 | grdinoModel=dino_model_name,
122 | boxThreshold=box_threshold,
123 | maskExpand=mask_expand,
124 | maxResolutionOnDetection=max_resolution_on_detection,
125 |
126 | steps=0,
127 | sampler_name="",
128 | scheduler="",
129 | mask_blur=mask_blur,
130 | inpainting_fill=0,
131 | batch_count=0,
132 | batch_size=0,
133 | cfg_scale=0,
134 | denoising_strength=0,
135 | height=0,
136 | width=0,
137 | inpaint_full_res_padding=False,
138 | img2img_fix_steps=False,
139 | inpainting_mask_invert=inpainting_mask_invert,
140 |
141 | images=[],
142 | override_sd_model=False,
143 | sd_model_checkpoint="",
144 | mask_num=mask_num,
145 | avoidance_mask=prepareMask(avoidance_mask_mode, avoidance_mask),
146 | only_custom_mask=only_custom_mask,
147 | custom_mask=prepareMask(custom_mask_mode, custom_mask),
148 | use_inpaint_diff=False,
149 | clip_skip=0,
150 | pass_into_hires_fix_automatically=False,
151 | save_before_hires_fix=False,
152 | do_not_use_mask=do_not_use_mask,
153 | rotation_fix=None,
154 | variation_seed=0,
155 | variation_strength=0,
156 | integer_only_masked=False,
157 | forbid_too_small_crop_region=False,
158 | correct_aspect_ratio=False,
159 |
160 | hires_fix_args=DUMMY_HIRESFIX_ARGS,
161 | cn_args=None,
162 | soft_inpaint_args=None,
163 | )
164 | prepareExpectedUIBehavior(gArgs)
165 |
166 |
167 | prepareMasksDir(project_path, fps_out)
168 | frames = list(getFrames(project_path))
169 | maxNum = len(frames)
170 | if only_the_first_fragment and fragment_length != 0:
171 | maxNum = fragment_length
172 | masksDir = os.path.join(project_path, 'masks')
173 |
174 | detectVideoMasks(gArgs, frames, masksDir, maxNum)
175 |
176 | return [], "", "", ""
177 |
178 |
179 |
180 |
181 |
182 | def reloadMasks(project_path: str, page: int):
183 | if not project_path:
184 | raise gr.Error("No project selected")
185 | masks = getMasks(project_path)
186 | if not masks:
187 | raise gr.Error("This project doesn't have masks")
188 | totalFrames = len(list(masks))
189 | totalPages = math.ceil(totalFrames/10) - 1
190 |
191 | if page > totalPages or page < 0:
192 | page = 0
193 | return getMasksPreview(project_path, page=page)
194 |
195 |
196 | def goNextPage(project_path: str, page: int):
197 | if not project_path:
198 | raise gr.Error("No project selected")
199 | masks = getMasks(project_path)
200 | if not masks:
201 | raise gr.Error("This project doesn't have masks")
202 | totalFrames = len(list(masks))
203 | totalPages = math.ceil(totalFrames/10) - 1
204 | page = page + 1
205 | if page > totalPages:
206 | page = 0
207 | return getMasksPreview(project_path, page=page)
208 |
209 |
210 | def goPrevPage(project_path: str, page: int):
211 | if not project_path:
212 | raise gr.Error("No project selected")
213 | masks = getMasks(project_path)
214 | if not masks:
215 | raise gr.Error("This project doesn't have masks")
216 | page = page - 1
217 | if page < 0:
218 | totalFrames = len(list(masks))
219 | totalPages = math.ceil(totalFrames/10) - 1
220 | page = totalPages
221 | return getMasksPreview(project_path, page=page)
222 |
223 |
224 | def goToPage(project_path: str, page: int):
225 | if not project_path:
226 | raise gr.Error("No project selected")
227 | page = page-1
228 | masks = getMasks(project_path)
229 | if not masks:
230 | raise gr.Error("This project doesn't have masks")
231 | totalFrames = len(list(masks))
232 | totalPages = math.ceil(totalFrames/10) - 1
233 | if page < 0 or page > totalPages:
234 | raise gr.Error(f"Page {page+1} is out of range [1, {totalPages+1}]")
235 | return getMasksPreview(project_path, page=page)
236 |
237 |
238 |
239 |
240 | def processMasks(action: str, project_path: str, page: int, mask_blur: int, masksNew: list[Image.Image]):
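    |     # 'add' pastes white through the newly drawn sketch onto the stored frame mask;
    |     # 'sub' does the same on the inverted mask and inverts back, i.e. erases the drawn area.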
241 | if not project_path:
242 | raise gr.Error("No project selected")
243 | masksOld = getMasks(project_path)
244 | if not masksOld:
245 | raise gr.Error("This project doesn't have masks")
246 | masksOld = list(masksOld)
247 | firstMaskIdx = page*10
248 | for idx in range(len(masksNew)):
249 | maskNew = masksNew[idx]
250 | if not maskNew: continue
251 | maskNew = maskNew['mask'].convert('L')
252 | maskNew = applyMaskBlur(maskNew, mask_blur)
253 | maskOld = masksOld[firstMaskIdx+idx].convert('L')
254 |
255 | if action == 'add':
256 | whiteFilling = Image.new('L', maskOld.size, 255)
257 | editedMask = maskOld
258 | editedMask.paste(whiteFilling, maskNew.resize(maskOld.size))
259 | elif action == 'sub':
260 | maskTmp = ImageOps.invert(maskOld)
261 | whiteFilling = Image.new('L', maskTmp.size, 255)
262 | maskTmp.paste(whiteFilling, maskNew.resize(maskOld.size))
263 | editedMask = ImageOps.invert(maskTmp)
264 | saveMask(project_path, editedMask, firstMaskIdx+idx)
265 | return getMasksPreview(project_path, page=page)
266 |
267 |
268 | def addMasks(project_path: str, page: int, mask_blur: int, m1, m2, m3, m4, m5, m6, m7, m8, m9, m10):
269 | processMasks('add', project_path, page, mask_blur, [m1, m2, m3, m4, m5, m6, m7, m8, m9, m10])
270 | return tuple([gr.update()] * 12)
271 |
272 | def subMasks(project_path: str, page: int, mask_blur: int, m1, m2, m3, m4, m5, m6, m7, m8, m9, m10):
273 | processMasks('sub', project_path, page, mask_blur, [m1, m2, m3, m4, m5, m6, m7, m8, m9, m10])
274 | return tuple([gr.update()] * 12)
275 |
276 |
--------------------------------------------------------------------------------
/replacer/ui/video/project.py:
--------------------------------------------------------------------------------
1 | import os, glob, datetime
2 | import gradio as gr
3 | from replacer.tools import convertIntoPath, EXT_NAME, copyOrHardLink
4 | from replacer.video_tools import readImages
5 |
6 |
7 | def getOriginalVideoPath(project_path: str):
8 | files = glob.glob(os.path.join(project_path, "original.*"))
9 | for file in files:
10 | if os.path.isfile(file) or os.path.islink(file):
11 | return file
12 | return None
13 |
14 |
15 | def select(project_path: str):
16 | project_path = convertIntoPath(project_path)
17 | if not getOriginalVideoPath(project_path):
18 | return "❌ Selected path doesn't have original video", ""
19 |
20 | return f"✅ Selected a project {project_path!r}", project_path
21 |
22 |
23 | def init(project_path: str, init_video: str):
24 | project_path = convertIntoPath(project_path)
25 | init_video = convertIntoPath(init_video)
26 | if not project_path:
27 | return "❌ Project path is not entered", "", gr.update()
28 | if not(os.path.isfile(init_video) or os.path.islink(init_video)):
29 | return "❌ Init video is not a file", "", gr.update()
30 | ext = os.path.basename(init_video).split('.')[-1]
31 | original_video = os.path.join(project_path, f'original.{ext}')
32 | os.makedirs(project_path, exist_ok=True)
33 | copyOrHardLink(init_video, original_video)
34 | return f"✅ Selected a new project {project_path!r}", project_path, project_path
35 |
36 |
37 | def genNewProjectPath(init_video: str) -> str:
38 | init_video = convertIntoPath(init_video)
39 | if not init_video:
40 | return ""
41 | timestamp = int(datetime.datetime.now().timestamp())
42 | name = f'{EXT_NAME} project - {timestamp}'
43 | return os.path.join(os.path.dirname(init_video), name)
44 |
45 |
46 | def getFrames(project_path: str):
47 | framesDir = os.path.join(project_path, 'frames')
48 | if not os.path.exists(framesDir):
49 | return None
50 | return readImages(framesDir)
51 |
52 |
53 | def getMasks(project_path: str):
54 |     masksDir = os.path.join(project_path, 'masks')
55 |     if not os.path.exists(masksDir):
56 |         return None
57 |     return readImages(masksDir)
58 |
--------------------------------------------------------------------------------
/replacer/ui/video/replacer_video_tab_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from replacer.ui.tools_ui import AttrDict, OverrideCustomScriptSource
3 | from replacer.tools import Pause
4 |
5 | from replacer.ui.video.video_options_ui import makeVideoOptionsUI
6 | from replacer.ui.video.video_project_ui import makeVideoProjectUI
7 | from replacer.ui.video.video_masking_ui import makeVideoMaskingUI
8 | from replacer.ui.video.video_generation_ui import makeVideoGenerationUI
9 |
10 |
11 | def getVideoTabUI(mainTabComp: AttrDict, isDedicatedPage: bool):
12 | comp = AttrDict()
13 | with OverrideCustomScriptSource('Video'):
14 | comp.selected_project_status = gr.Markdown("❌ Project is not selected", elem_id="replacer_video_selected_project_status")
15 | comp.selected_project = gr.Textbox(visible=False)
16 |
17 | with gr.Blocks(analytics_enabled=False) as replacerVideoTabUI:
18 | with gr.Tabs():
19 | with gr.Tab("Step 1 (Project)"):
20 | makeVideoProjectUI(comp)
21 | with gr.Tab("Step 2 (Options)"):
22 | makeVideoOptionsUI(comp)
23 | with gr.Tab("Step 3 (Masking)", elem_id="replacer_video_masking_tab"):
24 | makeVideoMaskingUI(comp, mainTabComp)
25 | with gr.Tab("Step 4 (Generation)"):
26 | makeVideoGenerationUI(comp, mainTabComp)
27 | with gr.Row():
28 | comp.selected_project_status.render()
29 | comp.selected_project.render()
30 | comp.pause_button = gr.Button(
31 |                     'pause/resume video generation',
32 | elem_id='replacer_video_pause',
33 | visible=True,
34 | elem_classes=["replacer-pause-button"],
35 | variant='compact'
36 | )
37 | comp.pause_button.click(
38 | fn=Pause.toggle
39 | )
40 |
41 | return replacerVideoTabUI
42 |
--------------------------------------------------------------------------------
/replacer/ui/video/video_generation_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from replacer.ui.tools_ui import AttrDict, getSubmitJsFunction, ui_toprow, IS_WEBUI_1_8
3 | from modules.ui_common import create_output_panel, update_generation_info
4 | from modules.call_queue import wrap_gradio_gpu_call
5 |
6 | from .generation import videoGenerateUI
7 |
8 | def makeVideoGenerationUI(comp: AttrDict, mainTabComp: AttrDict):
9 | with gr.Row():
10 | if IS_WEBUI_1_8:
11 | outputPanel = create_output_panel('replacer_video', "")
12 | replacer_gallery = outputPanel.gallery
13 | generation_info = outputPanel.generation_info
14 | html_info = outputPanel.infotext
15 | html_log = outputPanel.html_log
16 | else:
17 | replacer_gallery, generation_info, html_info, html_log = \
18 | create_output_panel('replacer_video', "")
19 | generation_info_button = gr.Button(visible=False, elem_id="replacer_video_generation_info_button")
20 | generation_info_button.click(
21 | fn=update_generation_info,
22 | _js="function(x, y, z){ return [x, y, selected_gallery_index()] }",
23 | inputs=[generation_info, html_info, html_info],
24 | outputs=[html_info, html_info],
25 | show_progress=False,
26 | )
27 |
28 | with gr.Row():
29 | if ui_toprow:
30 | toprow = ui_toprow.Toprow(is_compact=True, is_img2img=False, id_part='replacer_video_gen')
31 | toprow.create_inline_toprow_image()
32 | generateButton = toprow.submit
33 | generateButton.variant = 'secondary'
34 | generateButton.value = 'Generate 🎬'
35 | else:
36 | generateButton = gr.Button('Generate 🎬', elem_id='replacer_video_gen_generate')
37 |
38 | generateButton.click(
39 | _js=getSubmitJsFunction('replacer_video', 'replacer_video_gen', '', False),
40 | fn=wrap_gradio_gpu_call(videoGenerateUI, extra_outputs=[None, '', '']),
41 | inputs=[
42 | mainTabComp.dummy_component, # task_id
43 | comp.selected_project,
44 | comp.target_video_fps,
45 |
46 | comp.ad_fragment_length,
47 | comp.ad_internal_fps,
48 | comp.ad_batch_size,
49 | comp.ad_stride,
50 | comp.ad_overlap,
51 | comp.ad_latent_power,
52 | comp.ad_latent_scale,
53 | comp.ad_freeinit_enable,
54 | comp.ad_freeinit_filter,
55 | comp.ad_freeinit_ds,
56 | comp.ad_freeinit_dt,
57 | comp.ad_freeinit_iters,
58 | comp.ad_generate_only_first_fragment,
59 | comp.ad_cn_inpainting_model,
60 | comp.ad_control_weight,
61 | comp.ad_force_override_sd_model,
62 | comp.ad_force_sd_model_checkpoint,
63 | comp.ad_motion_model,
64 |
65 | mainTabComp.detectionPrompt,
66 | mainTabComp.avoidancePrompt,
67 | mainTabComp.positivePrompt,
68 | mainTabComp.negativePrompt,
69 | mainTabComp.upscaler_for_img2img,
70 | mainTabComp.seed,
71 | mainTabComp.sampler,
72 | mainTabComp.scheduler,
73 | mainTabComp.steps,
74 | mainTabComp.box_threshold,
75 | mainTabComp.mask_expand,
76 | mainTabComp.mask_blur,
77 | mainTabComp.max_resolution_on_detection,
78 | mainTabComp.sam_model_name,
79 | mainTabComp.dino_model_name,
80 | mainTabComp.cfg_scale,
81 | mainTabComp.denoise,
82 | mainTabComp.inpaint_padding,
83 | mainTabComp.inpainting_fill,
84 | mainTabComp.width,
85 | mainTabComp.height,
86 | mainTabComp.inpainting_mask_invert,
87 | mainTabComp.fix_steps,
88 | mainTabComp.override_sd_model,
89 | mainTabComp.sd_model_checkpoint,
90 | mainTabComp.mask_num,
91 | mainTabComp.only_custom_mask,
92 | mainTabComp.clip_skip,
93 | mainTabComp.pass_into_hires_fix_automatically,
94 | mainTabComp.save_before_hires_fix,
95 | mainTabComp.do_not_use_mask,
96 | mainTabComp.rotation_fix,
97 | mainTabComp.variation_seed,
98 | mainTabComp.variation_strength,
99 | mainTabComp.integer_only_masked,
100 | mainTabComp.forbid_too_small_crop_region,
101 | mainTabComp.correct_aspect_ratio,
102 | ] + mainTabComp.cn_inputs
103 | + mainTabComp.soft_inpaint_inputs,
104 | outputs=[
105 | replacer_gallery,
106 | generation_info,
107 | html_info,
108 | html_log,
109 | ],
110 | show_progress=ui_toprow is None,
111 | )
112 |
113 |
--------------------------------------------------------------------------------
/replacer/ui/video/video_masking_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from replacer.ui.tools_ui import AttrDict, getSubmitJsFunction, ui_toprow, IS_WEBUI_1_8
3 | from replacer.tools import EXT_NAME
4 | from replacer.options import getVideoMaskEditingColorStr
5 | from modules.ui_common import create_output_panel, update_generation_info
6 | from modules.call_queue import wrap_gradio_gpu_call
7 |
8 | from .masking import generateEmptyMasks, reloadMasks, goNextPage, goPrevPage, goToPage, addMasks, subMasks, generateDetectedMasks
9 |
10 |
11 | def getMaskComponent(num: int):
12 | mask = gr.Image(label="Custom mask",
13 | show_label=False,
14 | source="upload",
15 | interactive=True,
16 | type="pil",
17 | tool="sketch",
18 | image_mode="RGB",
19 | brush_color=getVideoMaskEditingColorStr(),
20 | elem_id=f'replacer_video_mask_{num}',
21 | elem_classes='replacer_video_mask',
22 | )
23 | return mask
24 |
25 |
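# Note (editorial summary of the UI below): the masking tab shows ten editable mask canvases per
# page. Page navigation works through the hidden `selectedPage` number, and the "⧉ Add" /
# "⧉ Subtract" buttons apply the sketched areas for the current page and then call reloadMasks()
# so the previews stay in sync with the masks stored in the project directory.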
26 | def makeVideoMaskingUI(comp: AttrDict, mainTabComp: AttrDict):
27 | with gr.Row():
28 | reload_masks = gr.Button("⟳ Reload page")
29 | if ui_toprow:
30 | toprow = ui_toprow.Toprow(is_compact=True, is_img2img=False, id_part='replacer_video_masks_detect')
31 | toprow.create_inline_toprow_image()
32 | generate_detected_masks = toprow.submit
33 | generate_detected_masks.variant = 'secondary'
34 | generate_detected_masks.value = 'Generate detected masks'
35 | else:
36 | generate_detected_masks = gr.Button('Generate detected masks', elem_id='replacer_video_masks_detect_generate')
37 |             gr.Markdown(f"All detection options, including the prompt, are taken from the {EXT_NAME} tab. "
38 |                 "You can stop masking at any time; the output video will be cut short accordingly")
39 | generate_empty_masks = gr.Button("Generate empty masks")
40 | with gr.Row():
41 | if IS_WEBUI_1_8:
42 | outputPanel = create_output_panel('replacer_video_masking_progress', "")
43 | replacer_gallery = outputPanel.gallery
44 | generation_info = outputPanel.generation_info
45 | html_info = outputPanel.infotext
46 | html_log = outputPanel.html_log
47 | else:
48 | replacer_gallery, generation_info, html_info, html_log = \
49 | create_output_panel('replacer_video_masking_progress', "")
50 | generation_info_button = gr.Button(visible=False, elem_id="replacer_video_masking_progress_info_button")
51 | generation_info_button.click(
52 | fn=update_generation_info,
53 | _js="function(x, y, z){ return [x, y, selected_gallery_index()] }",
54 | inputs=[generation_info, html_info, html_info],
55 | outputs=[html_info, html_info],
56 | show_progress=False,
57 | )
58 | with gr.Row(elem_id="replacer_video_masking_row_1"):
59 | mask1 = getMaskComponent(1)
60 | mask2 = getMaskComponent(2)
61 | mask3 = getMaskComponent(3)
62 | mask4 = getMaskComponent(4)
63 | mask5 = getMaskComponent(5)
64 | with gr.Row(elem_id="replacer_video_masking_row_2"):
65 | mask6 = getMaskComponent(6)
66 | mask7 = getMaskComponent(7)
67 | mask8 = getMaskComponent(8)
68 | mask9 = getMaskComponent(9)
69 | mask10 = getMaskComponent(10)
70 | with gr.Row():
71 | pageLabel = gr.Markdown("**Page 0/0**", elem_id="replacer_video_masking_page_label")
72 | selectedPage = gr.Number(value=0, visible=False, precision=0)
73 | goPrev = gr.Button("← Prev. page")
74 | goNext = gr.Button("Next page →")
75 | addMasksButton = gr.Button("⧉ Add masks on this page")
76 | subMasksButton = gr.Button("⧉ Subtract masks on this page")
77 | with gr.Row():
78 | pageToGo = gr.Number(label="Page to go", value=1, precision=0, minimum=1)
79 | goToPageButton = gr.Button("Go to page")
80 |         gr.Markdown('The quality of the mask previews is reduced for performance\n'
81 |             'If you see broken images, just click "Reload page"')
82 |         gr.Markdown('You can copy old masks from the project\'s nested directory\n'
83 |             'The FPS setting affects the masks')
84 |
85 |
86 |
87 | reload_masks.click(
88 | fn=reloadMasks,
89 | _js='closeAllVideoMasks',
90 | inputs=[comp.selected_project, selectedPage],
91 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
92 | postprocess=False,
93 | )
94 | goPrev.click(
95 | fn=goPrevPage,
96 | _js='closeAllVideoMasks',
97 | inputs=[comp.selected_project, selectedPage],
98 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
99 | postprocess=False,
100 | )
101 | goNext.click(
102 | fn=goNextPage,
103 | _js='closeAllVideoMasks',
104 | inputs=[comp.selected_project, selectedPage],
105 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
106 | postprocess=False,
107 | )
108 | goToPageButton.click(
109 | fn=goToPage,
110 | _js='closeAllVideoMasks',
111 | inputs=[comp.selected_project, pageToGo],
112 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
113 | postprocess=False,
114 | )
115 |
116 |
117 |
118 | addMasksButton.click(
119 | fn=addMasks,
120 | inputs=[comp.selected_project, selectedPage, mainTabComp.mask_blur, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
121 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
122 | ).then(
123 | fn=reloadMasks,
124 | _js='closeAllVideoMasks',
125 | inputs=[comp.selected_project, selectedPage],
126 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
127 | postprocess=False,
128 | )
129 |
130 | subMasksButton.click(
131 | fn=subMasks,
132 | inputs=[comp.selected_project, selectedPage, mainTabComp.mask_blur, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10,],
133 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
134 | ).then(
135 | fn=reloadMasks,
136 | _js='closeAllVideoMasks',
137 | inputs=[comp.selected_project, selectedPage],
138 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
139 | postprocess=False,
140 | )
141 |
142 |
143 |
144 |
145 |
146 | generate_empty_masks.click(
147 | fn=lambda: None,
148 | _js='closeAllVideoMasks',
149 | ).then(
150 | fn=generateEmptyMasks,
151 | inputs=[mainTabComp.dummy_component, comp.selected_project, comp.target_video_fps, comp.ad_generate_only_first_fragment, comp.ad_fragment_length],
152 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
153 | postprocess=False,
154 | )
155 |
156 |
157 | generate_detected_masks.click(
158 | fn=lambda: None,
159 | _js='closeAllVideoMasks',
160 | ).then(
161 | fn=wrap_gradio_gpu_call(generateDetectedMasks, extra_outputs=[None, '', '']),
162 | _js=getSubmitJsFunction('replacer_video_masking_progress', 'replacer_video_masks_detect', '', False),
163 | inputs=[mainTabComp.dummy_component, comp.selected_project, comp.target_video_fps, comp.ad_generate_only_first_fragment, comp.ad_fragment_length,
164 | mainTabComp.detectionPrompt,
165 | mainTabComp.avoidancePrompt,
166 | mainTabComp.seed,
167 | mainTabComp.sam_model_name,
168 | mainTabComp.dino_model_name,
169 | mainTabComp.box_threshold,
170 | mainTabComp.mask_expand,
171 | mainTabComp.mask_blur,
172 | mainTabComp.max_resolution_on_detection,
173 | mainTabComp.inpainting_mask_invert,
174 | mainTabComp.mask_num,
175 | mainTabComp.avoid_mask_mode,
176 | mainTabComp.avoidance_mask,
177 | mainTabComp.only_custom_mask,
178 | mainTabComp.custom_mask_mode,
179 | mainTabComp.custom_mask,
180 | mainTabComp.do_not_use_mask,
181 | ],
182 | outputs=[
183 | replacer_gallery,
184 | generation_info,
185 | html_info,
186 | html_log,
187 | ],
188 | show_progress=ui_toprow is None,
189 | ).then(
190 | fn=reloadMasks,
191 | inputs=[comp.selected_project, selectedPage],
192 | outputs=[selectedPage, pageLabel, mask1, mask2, mask3, mask4, mask5, mask6, mask7, mask8, mask9, mask10],
193 | postprocess=False,
194 | )
195 |
196 |
--------------------------------------------------------------------------------
/replacer/ui/video/video_project_ui.py:
--------------------------------------------------------------------------------
1 | import gradio as gr
2 | from replacer.ui.tools_ui import AttrDict, ResizeHandleRow
3 |
4 | from .project import select, init, genNewProjectPath
5 |
6 |
7 |
8 | def makeVideoProjectUI(comp: AttrDict):
9 | with ResizeHandleRow():
10 | with gr.Column():
11 | gr.Markdown("**--Init--**")
12 | init_video = gr.Textbox(
13 | label="Init video",
14 |             placeholder="Path to a video on the machine where the server is running.",
15 | elem_id="replacer_init_video")
16 | with gr.Row():
17 | project_path = gr.Textbox(
18 | label="Path to project",
19 | elem_id="replacer_video_project_path")
20 | gen_path = gr.Button("Generate path")
21 | init_button = gr.Button("Init")
22 |
23 | with gr.Column():
24 | gr.Markdown("**--Select--**")
25 | select_path = gr.Textbox(label="Project path")
26 | select_button = gr.Button("Select")
27 |
28 | gen_path.click(fn=genNewProjectPath, inputs=[init_video], outputs=[project_path])
29 | init_button.click(fn=init, inputs=[project_path, init_video], outputs=[comp.selected_project_status, comp.selected_project, select_path])
30 | select_button.click(fn=select, inputs=[select_path], outputs=[comp.selected_project_status, comp.selected_project])
31 |
32 |
--------------------------------------------------------------------------------
/replacer/video_animatediff.py:
--------------------------------------------------------------------------------
1 | import os, copy, math, shutil
2 | from PIL import Image, ImageChops
3 | from tqdm import tqdm
4 | from modules import shared, errors
5 | from replacer.generation_args import GenerationArgs
6 | from replacer.mask_creator import createMask, NothingDetectedError
7 | from replacer.inpaint import inpaint
8 | from replacer.generate import generateSingle
9 | from replacer.tools import ( interrupted, applyMaskBlur, clearCache, applyRotationFix, removeRotationFix,
10 | Pause, extraMaskExpand,
11 | )
12 | from replacer.video_tools import fastFrameSave
13 | from replacer.extensions import replacer_extensions
14 |
15 |
16 |
17 | def processFragment(fragmentPath: str, initImage: Image.Image, gArgs: GenerationArgs):
18 | initImage = applyRotationFix(initImage, gArgs.rotation_fix)
19 | fastFrameSave(initImage, os.path.join(fragmentPath, 'frames'), 0)
20 | gArgs = gArgs.copy()
21 | gArgs.inpainting_mask_invert = False
22 | gArgs.mask_blur = 0
23 | gArgs.animatediff_args.needApplyAnimateDiff = True
24 | gArgs.animatediff_args.video_path = os.path.join(fragmentPath, 'frames')
25 | gArgs.animatediff_args.mask_path = os.path.join(fragmentPath, 'masks')
26 | processed, _ = inpaint(initImage, gArgs)
27 |
28 | outDir = os.path.join(fragmentPath, 'out')
29 | for idx in range(len(processed.images)):
30 | fastFrameSave(processed.images[idx], outDir, idx)
31 |
32 | return processed
33 |
34 |
35 | def detectVideoMasks(gArgs: GenerationArgs, frames: list[Image.Image], masksPath: str, maxNum: int|None) -> None:
36 | blackFilling = Image.new('L', frames[0].size, 0).convert('RGBA')
37 | if not maxNum:
38 | maxNum = len(frames)
39 | mask = None
40 | shared.state.job_count = maxNum
41 | Pause.paused = False
42 |
43 | for idx in range(maxNum):
44 | Pause.wait()
45 | if interrupted(): return
46 | shared.state.textinfo = f"generating mask {idx+1} / {maxNum}"
47 | print(f" {idx+1} / {maxNum}")
48 |
49 | frame = frames[idx].convert('RGBA')
50 | try:
51 | mask = createMask(frame, gArgs).mask
52 | if gArgs.inpainting_mask_invert:
53 | mask = ImageChops.invert(mask.convert('L'))
54 | mask = applyMaskBlur(mask.convert('RGBA'), gArgs.mask_blur)
55 | mask = mask.resize(frame.size)
56 | except NothingDetectedError as e:
57 | print(e)
58 | if mask is None or mask is blackFilling:
59 | mask = blackFilling
60 | else:
61 | mask = extraMaskExpand(mask, 50)
62 |
63 | fastFrameSave(mask, masksPath, idx)
64 | shared.state.nextjob()
65 |
66 |
67 |
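# getFragments() splits the frame/mask sequence into AnimateDiff fragments of `fragment_length`
# frames, laid out on disk as fragment_N/frames, /masks and /out. The last frame of each fragment
# is re-used as the first frame of the next one (hence the ceil((N - 1) / (fragment_length - 1))
# fragment count computed by the caller), and the tail of the final fragment is padded by
# repeating its last frame and mask.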
68 | def getFragments(gArgs: GenerationArgs, fragments_path: str, frames: list[Image.Image], masks: list[Image.Image], totalFragments: int):
69 | fragmentSize = gArgs.animatediff_args.fragment_length
70 |
71 | fragmentNum = 0
72 | frameInFragmentIdx = fragmentSize
73 |     fragmentPath: str|None = None
74 |     framesDir: str|None = None
75 |     masksDir: str|None = None
76 |     outDir: str|None = None
77 |     frame: Image.Image|None = None
78 |     mask: Image.Image|None = None
79 |
80 |
81 | for frameIdx in range(len(masks)):
82 | if frameInFragmentIdx == fragmentSize:
83 | if fragmentPath is not None:
84 | text = f"inpainting fragment {fragmentNum} / {totalFragments}"
85 | print(text)
86 | shared.state.textinfo = text
87 | yield fragmentPath
88 | frameInFragmentIdx = 0
89 | fragmentNum += 1
90 | fragmentPath = os.path.join(fragments_path, f"fragment_{fragmentNum}")
91 |
92 | framesDir = os.path.join(fragmentPath, 'frames'); os.makedirs(framesDir, exist_ok=True)
93 | masksDir = os.path.join(fragmentPath, 'masks'); os.makedirs(masksDir, exist_ok=True)
94 | outDir = os.path.join(fragmentPath, 'out'); os.makedirs(outDir, exist_ok=True)
95 |
96 | # last frame goes first in the next fragment
97 | if mask is not None:
98 | fastFrameSave(frame, framesDir, frameInFragmentIdx)
99 | fastFrameSave(mask, masksDir, frameInFragmentIdx)
100 | frameInFragmentIdx = 1
101 |
102 | Pause.wait()
103 | if interrupted(): return
104 | print(f" Preparing frame in fragment {fragmentNum}: {frameInFragmentIdx+1} / {fragmentSize}")
105 |
106 | frame = frames[frameIdx]
107 | mask = masks[frameIdx]
108 |
109 | frame = applyRotationFix(frame, gArgs.rotation_fix)
110 | fastFrameSave(frame, framesDir, frameInFragmentIdx)
111 | mask = applyRotationFix(mask, gArgs.rotation_fix)
112 | fastFrameSave(mask, masksDir, frameInFragmentIdx)
113 | frameInFragmentIdx += 1
114 |
115 | if frameInFragmentIdx > 1:
116 | for idx in range(frameInFragmentIdx+1, min(fragmentSize, 12)):
117 | fastFrameSave(frame, framesDir, idx)
118 | fastFrameSave(mask, masksDir, idx)
119 |
120 | text = f"inpainting fragment {fragmentNum} / {totalFragments}"
121 | print(text)
122 | shared.state.textinfo = text
123 | yield fragmentPath
124 |
125 |
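# animatediffGenerate() drives the whole video pass:
#   1. the first frame is inpainted on its own (using its mask as a custom mask) to obtain a
#      stable init image;
#   2. each fragment is processed with processFragment(); the last output frame of a fragment
#      becomes the init image of the next one, keeping fragments visually continuous;
#   3. the fragment outputs are merged into result_dir, blending the shared boundary frame of
#      consecutive fragments 50/50.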
126 | def animatediffGenerate(gArgs: GenerationArgs, fragments_path: str, result_dir: str,
127 | frames: list[Image.Image], masks: list[Image.Image], video_fps: float):
128 | if gArgs.animatediff_args.force_override_sd_model:
129 | gArgs.override_sd_model = True
130 | gArgs.sd_model_checkpoint = gArgs.animatediff_args.force_sd_model_checkpoint
131 | if gArgs.animatediff_args.internal_fps <= 0:
132 | gArgs.animatediff_args.internal_fps = video_fps
133 | if gArgs.animatediff_args.fragment_length <= 0 or len(masks) < gArgs.animatediff_args.fragment_length:
134 | gArgs.animatediff_args.fragment_length = len(masks)
135 | gArgs.animatediff_args.needApplyCNForAnimateDiff = True
136 |
137 | totalFragments = math.ceil((len(masks) - 1) / (gArgs.animatediff_args.fragment_length - 1))
138 | if gArgs.animatediff_args.generate_only_first_fragment:
139 | totalFragments = 1
140 |
141 | try:
142 |         shared.state.textinfo = f"processing the first frame. Total number of fragments: {totalFragments}"
143 | firstFrameGArgs = gArgs.copy()
144 | firstFrameGArgs.only_custom_mask = True
145 | firstFrameGArgs.custom_mask = masks[0]
146 | processedFirstImg, _ = generateSingle(frames[0], firstFrameGArgs, "", "", False, [], None)
147 | initImage: Image.Image = processedFirstImg.images[0]
148 | except NothingDetectedError as e:
149 | print(e)
150 | initImage: Image.Image = copy.copy(frames[0])
151 |
152 | oldJob = shared.state.job
153 | shared.state.end()
154 | shared.state.begin(oldJob + '_animatediff_inpaint')
155 | shared.state.job_count = totalFragments
156 | shared.total_tqdm.clear()
157 | shared.total_tqdm.updateTotal(totalFragments * gArgs.totalSteps())
158 | Pause.paused = False
159 |
160 | fragmentPaths = []
161 |
162 | try:
163 | for fragmentPath in getFragments(gArgs, fragments_path, frames, masks, totalFragments):
164 |             if not shared.cmd_opts.lowram: # not to be confused with lowvram; skip cache clearing when --lowram is set
165 | clearCache()
166 | processed = processFragment(fragmentPath, initImage, gArgs)
167 | fragmentPaths.append(fragmentPath)
168 | initImage = processed.images[-1]
169 | if gArgs.animatediff_args.generate_only_first_fragment:
170 | break
171 | if interrupted():
172 | break
173 | except Exception as e:
174 | if type(e) is replacer_extensions.controlnet.UnitIsReserved:
175 | raise
176 | errors.report(f'{e} ***', exc_info=True)
177 |
178 |
179 | text = "merging fragments"
180 | shared.state.textinfo = text
181 | print(text)
182 | def readImages(input_dir: str):
183 | image_list = shared.listfiles(input_dir)
184 | for filepath in image_list:
185 | image = Image.open(filepath).convert('RGBA')
186 | image.original_path = filepath
187 | yield image
188 | def saveImage(image: Image.Image):
189 | if not image: return
190 | savePath = os.path.join(result_dir, f"{frameNum:05d}-{gArgs.seed}.{shared.opts.samples_format}")
191 | if hasattr(image, 'original_path') and image.original_path:
192 | shutil.copy(image.original_path, savePath)
193 | else:
194 | image.convert('RGB').save(savePath)
195 | os.makedirs(result_dir, exist_ok=True)
196 | theLastImage = None
197 | frameNum = 0
198 |
199 | for fragmentPath in tqdm(fragmentPaths):
200 | images = list(readImages(os.path.join(fragmentPath, 'out')))
201 | if len(images) <= 1:
202 | break
203 | if theLastImage:
204 | images[0] = Image.blend(images[0], theLastImage, 0.5)
205 | theLastImage = images[-1]
206 | images = images[:-1]
207 |
208 | for image in images:
209 | if frameNum >= len(masks):
210 | break
211 | saveImage(image)
212 | frameNum += 1
213 | saveImage(theLastImage)
214 |
215 |
216 |
--------------------------------------------------------------------------------
/replacer/video_tools.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | import cv2
3 | import os
4 | import modules.shared as shared
5 | from PIL import Image
6 | from shutil import rmtree
7 | from replacer.generation_args import GenerationArgs
8 | try:
9 | from imageio_ffmpeg import get_ffmpeg_exe
10 | FFMPEG = get_ffmpeg_exe()
11 | except Exception as e:
12 | FFMPEG = 'ffmpeg'
13 |
14 |
15 | def runFFMPEG(*ffmpeg_cmd):
16 | ffmpeg_cmd = [FFMPEG] + list(ffmpeg_cmd)
17 | print(' '.join(f"'{str(v)}'" if ' ' in str(v) else str(v) for v in ffmpeg_cmd))
18 | rc = subprocess.run(ffmpeg_cmd).returncode
19 | if rc != 0:
20 | raise Exception(f'ffmpeg exited with code {rc}. See console for details')
21 |
22 |
23 |
24 | def separate_video_into_frames(video_path, fps_out, out_path, ext):
25 | assert video_path, 'video not selected'
26 | assert out_path, 'out path not specified'
27 |
28 | # Create the temporary folder if it doesn't exist
29 | os.makedirs(out_path, exist_ok=True)
30 |
31 |     # The fps filter below requires a non-zero frame rate
32 | assert fps_out != 0, "fps can't be 0"
33 |
34 | runFFMPEG(
35 | '-i', video_path,
36 | '-vf', f'fps={fps_out}',
37 | '-y',
38 | os.path.join(out_path, f'frame_%05d.{ext}'),
39 | )
40 |
41 |
42 | def readImages(input_dir):
43 | assert input_dir, 'input directory not selected'
44 | image_list = shared.listfiles(input_dir)
45 | for filename in image_list:
46 | try:
47 | image = Image.open(filename)
48 | except Exception:
49 | continue
50 | yield image
51 |
52 |
53 | def getVideoFrames(video_path, fps):
54 | assert video_path, 'video not selected'
55 | temp_folder = os.path.join(os.path.dirname(video_path), 'temp')
56 | if os.path.exists(temp_folder):
57 | rmtree(temp_folder)
58 |     separate_video_into_frames(video_path, fps, temp_folder, shared.opts.samples_format)  # frame extension assumed to follow the webui samples format
59 |     return readImages(temp_folder), getFpsFromVideo(video_path), fps  # frames, fps_in, fps_out
60 |
61 |
62 | def save_video(frames_dir, fps, org_video, output_path, seed):
63 | runFFMPEG(
64 | '-framerate', str(fps),
65 | '-i', os.path.join(frames_dir, f'%5d-{seed}.{shared.opts.samples_format}'),
66 | '-r', str(fps),
67 | '-i', org_video,
68 | '-map', '0:v:0',
69 | '-map', '1:a:0?',
70 | '-c:v', 'libx264',
71 | '-c:a', 'aac',
72 | '-vf', f'fps={fps}',
73 | '-profile:v', 'main',
74 | '-pix_fmt', 'yuv420p',
75 | '-shortest',
76 | '-y',
77 | output_path
78 | )
79 |
80 |
81 | def fastFrameSave(image: Image.Image, path: str, idx):
82 | savePath = os.path.join(path, f'frame_{idx}.{shared.opts.samples_format}')
83 | image.convert('RGB').save(savePath, subsampling=0, quality=93)
84 |
85 |
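# overrideSettingsForVideo() temporarily forces the settings the video pipeline depends on
# (samples named by "[seed]", numbered saves, ControlNet ignoring the non-inpaint mask) and
# returns a closure that restores the previous values.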
86 | def overrideSettingsForVideo():
87 | old_samples_filename_pattern = shared.opts.samples_filename_pattern
88 | old_save_images_add_number = shared.opts.save_images_add_number
89 | old_controlnet_ignore_noninpaint_mask = shared.opts.data.get("controlnet_ignore_noninpaint_mask", False)
90 | def restoreOpts():
91 | shared.opts.samples_filename_pattern = old_samples_filename_pattern
92 | shared.opts.save_images_add_number = old_save_images_add_number
93 | shared.opts.data["controlnet_ignore_noninpaint_mask"] = old_controlnet_ignore_noninpaint_mask
94 | shared.opts.samples_filename_pattern = "[seed]"
95 | shared.opts.save_images_add_number = True
96 | shared.opts.data["controlnet_ignore_noninpaint_mask"] = True
97 | return restoreOpts
98 |
99 |
100 |
101 | def getFpsFromVideo(video_path: str) -> float:
102 | video = cv2.VideoCapture(video_path)
103 | fps = video.get(cv2.CAP_PROP_FPS)
104 | video.release()
105 | return fps
106 |
107 | FREEINIT_filter_type_list = [
108 | "butterworth",
109 | "gaussian",
110 | "box",
111 | "ideal"
112 | ]
113 |
--------------------------------------------------------------------------------
/scripts/replacer_api.py:
--------------------------------------------------------------------------------
1 | from typing import Any
2 | from fastapi import FastAPI, Body
3 | from pydantic import BaseModel
4 | import modules.script_callbacks as script_callbacks
5 | from modules import shared
6 | from modules.api.api import encode_pil_to_base64, decode_base64_to_image
7 | from modules.call_queue import queue_lock
8 | from replacer.generate import generate
9 | from replacer.generation_args import GenerationArgs, HiresFixArgs
10 | from replacer.tools import generateSeed
11 | from replacer.ui.tools_ui import IS_WEBUI_1_9, prepareExpectedUIBehavior
12 | from replacer.extensions import replacer_extensions
13 |
14 |
15 |
16 |
17 | def replacer_api(_, app: FastAPI):
18 | from scripts.sam import sam_model_list
19 | from scripts.dino import dino_model_list
20 | try:
21 | from lama_cleaner_masked_content.inpaint import lamaInpaint
22 | lama_cleaner_available = True
23 | except Exception as e:
24 | lama_cleaner_available = False
25 |
26 | class ReplaceRequest(BaseModel):
27 | input_image: str = "base64 image"
28 | detection_prompt: str = ""
29 | avoidance_prompt: str = ""
30 | positive_prompt: str = ""
31 | negative_prompt: str = ""
32 | width: int = 512
33 | height: int = 512
34 | sam_model_name: str = sam_model_list[0] if sam_model_list else ""
35 | dino_model_name: str = dino_model_list[0]
36 | seed: int = -1
37 | sampler: str = "DPM++ 2M SDE" if IS_WEBUI_1_9 else "DPM++ 2M SDE Karras"
38 | scheduler: str = "Automatic"
39 | steps: int = 20
40 | box_threshold: float = 0.3
41 | mask_expand: int = 35
42 | mask_blur: int = 4
43 | mask_num: str = "Random"
44 |         max_resolution_on_detection: int = 1280
45 | cfg_scale: float = 5.5
46 | denoise: float = 1.0
47 |         inpaint_padding: int = 40
48 | inpainting_mask_invert: bool = False
49 |         upscaler_for_img2img: str = ""
50 |         fix_steps: bool = False
51 |         inpainting_fill: int = 0
52 |         sd_model_checkpoint: str = ""
53 | clip_skip: int = 1
54 | rotation_fix: str = '-' # choices: '-', '⟲', '⟳', '🗘'
55 | extra_include: list = ["mask", "box", "cut", "preview", "script"]
56 | variation_seed: int = -1
57 | variation_strength: float = 0.0
58 | integer_only_masked: bool = False
59 | forbid_too_small_crop_region: bool = True
60 | correct_aspect_ratio: bool = True
61 | avoidance_mask: str = "base64 image"
62 | custom_mask: str = "base64 image"
63 | only_custom_mask: bool = True # only if there is a custom mask
64 |
65 | use_hires_fix: bool = False
66 | hf_upscaler: str = "ESRGAN_4x"
67 | hf_steps: int = 4
68 | hf_sampler: str = "Use same sampler"
69 | hf_scheduler: str = "Use same scheduler"
70 | hf_denoise: float = 0.35
71 | hf_cfg_scale: float = 1.0
72 | hf_positive_prompt_suffix: str = ""
73 | hf_size_limit: int = 1800
74 | hf_above_limit_upscaler: str = "Lanczos"
75 | hf_unload_detection_models: bool = True
76 | hf_disable_cn: bool = True
77 | hf_extra_mask_expand: int = 5
78 | hf_positive_prompt: str = ""
79 | hf_negative_prompt: str = ""
80 | hf_sd_model_checkpoint: str = "Use same checkpoint"
81 | hf_extra_inpaint_padding: int = 250
82 | hf_extra_mask_blur: int = 2
83 | hf_randomize_seed: bool = True
84 | hf_soft_inpaint: str = "Same"
85 | hf_supersampling: float = 1.6
86 |
87 |         scripts: dict = {}  # ControlNet and Soft Inpainting. See apiExample.py for an example
88 |
89 |
90 | @app.post("/replacer/replace")
91 | async def api_replacer_replace(data: ReplaceRequest = Body(...)) -> Any:
92 | image = decode_base64_to_image(data.input_image).convert("RGBA")
93 | avoidance_mask = None
94 | if isinstance(data.avoidance_mask, str) and len(data.avoidance_mask) > 20:
95 | avoidance_mask = decode_base64_to_image(data.avoidance_mask)
96 | custom_mask = None
97 | if isinstance(data.custom_mask, str) and len(data.custom_mask) > 20:
98 | custom_mask = decode_base64_to_image(data.custom_mask)
99 |
100 | cn_args, soft_inpaint_args = replacer_extensions.prepareScriptsArgs_api(data.scripts)
101 |
102 | hires_fix_args = HiresFixArgs(
103 | upscaler = data.hf_upscaler,
104 | steps = data.hf_steps,
105 | sampler = data.hf_sampler,
106 | scheduler=data.hf_scheduler,
107 | denoise = data.hf_denoise,
108 | cfg_scale = data.hf_cfg_scale,
109 | positive_prompt_suffix = data.hf_positive_prompt_suffix,
110 | size_limit = data.hf_size_limit,
111 | above_limit_upscaler = data.hf_above_limit_upscaler,
112 | unload_detection_models = data.hf_unload_detection_models,
113 | disable_cn = data.hf_disable_cn,
114 | extra_mask_expand = data.hf_extra_mask_expand,
115 | positive_prompt = data.hf_positive_prompt,
116 | negative_prompt = data.hf_negative_prompt,
117 | sd_model_checkpoint = data.hf_sd_model_checkpoint,
118 | extra_inpaint_padding = data.hf_extra_inpaint_padding,
119 | extra_mask_blur = data.hf_extra_mask_blur,
120 | randomize_seed = data.hf_randomize_seed,
121 | soft_inpaint = data.hf_soft_inpaint,
122 | supersampling = data.hf_supersampling,
123 | )
124 |
125 | gArgs = GenerationArgs(
126 | positivePrompt=data.positive_prompt,
127 | negativePrompt=data.negative_prompt,
128 | detectionPrompt=data.detection_prompt,
129 | avoidancePrompt=data.avoidance_prompt,
130 | upscalerForImg2Img=data.upscaler_for_img2img,
131 | seed=data.seed,
132 | samModel=data.sam_model_name,
133 | grdinoModel=data.dino_model_name,
134 | boxThreshold=data.box_threshold,
135 | maskExpand=data.mask_expand,
136 | maxResolutionOnDetection=data.max_resolution_on_detection,
137 |
138 | steps=data.steps,
139 | sampler_name=data.sampler,
140 | scheduler=data.scheduler,
141 | mask_blur=data.mask_blur,
142 | inpainting_fill=data.inpainting_fill,
143 | batch_count=1,
144 | batch_size=1,
145 | cfg_scale=data.cfg_scale,
146 | denoising_strength=data.denoise,
147 | height=data.height,
148 | width=data.width,
149 | inpaint_full_res_padding=data.inpaint_padding,
150 | img2img_fix_steps=data.fix_steps,
151 | inpainting_mask_invert=data.inpainting_mask_invert,
152 |
153 | images=[image],
154 | override_sd_model=True,
155 | sd_model_checkpoint=data.sd_model_checkpoint,
156 | mask_num=data.mask_num,
157 | avoidance_mask=avoidance_mask,
158 | only_custom_mask=data.only_custom_mask,
159 | custom_mask=custom_mask,
160 | use_inpaint_diff=False,
161 | clip_skip=data.clip_skip,
162 | pass_into_hires_fix_automatically=data.use_hires_fix,
163 | save_before_hires_fix=False,
164 | do_not_use_mask=False,
165 | rotation_fix=data.rotation_fix,
166 | variation_seed=data.variation_seed,
167 | variation_strength=data.variation_strength,
168 | integer_only_masked=data.integer_only_masked,
169 | forbid_too_small_crop_region=data.forbid_too_small_crop_region,
170 | correct_aspect_ratio=data.correct_aspect_ratio,
171 |
172 | hires_fix_args=hires_fix_args,
173 | cn_args=cn_args,
174 | soft_inpaint_args=soft_inpaint_args,
175 | )
176 | prepareExpectedUIBehavior(gArgs)
177 |
178 | with queue_lock:
179 | shared.state.begin('api /replacer/replace')
180 | try:
181 | processed, allExtraImages = generate(gArgs, "", False, data.extra_include)
182 | finally:
183 | shared.state.end()
184 |
185 | return {
186 | "image": encode_pil_to_base64(processed.images[0]).decode(),
187 | "extra_images": [encode_pil_to_base64(x).decode() for x in allExtraImages],
188 | "info": processed.info,
189 | "json": processed.js(),
190 | }
191 |
192 |
193 | @app.post("/replacer/available_options")
194 | async def api_replacer_available_options() -> Any:
195 | return {
196 | "sam_model_name": sam_model_list,
197 | "dino_model_name": dino_model_list,
198 | "upscalers": [""] + [x.name for x in shared.sd_upscalers],
199 | "lama_cleaner_available": lama_cleaner_available, # inpainting_fill=4, https://github.com/light-and-ray/sd-webui-lama-cleaner-masked-content
200 | "available_scripts": replacer_extensions.getAvailableScripts_api(),
201 | }
202 |
203 |
204 | script_callbacks.on_app_started(replacer_api)
205 |
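# Minimal client sketch for the /replacer/replace endpoint above. This is illustrative only
# (not part of this module): it assumes a local webui at http://127.0.0.1:7860 and an input
# file test.jpg; the maintained example is apiExample.py in the repository root.
#
#   import base64, requests
#
#   with open('test.jpg', 'rb') as f:
#       img = base64.b64encode(f.read()).decode()
#   resp = requests.post('http://127.0.0.1:7860/replacer/replace', json={
#       'input_image': img,
#       'detection_prompt': 'background',
#       'positive_prompt': 'waterfall',
#   })
#   resp.raise_for_status()
#   with open('result.png', 'wb') as f:
#       f.write(base64.b64decode(resp.json()['image']))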
206 |
--------------------------------------------------------------------------------
/scripts/replacer_main_ui.py:
--------------------------------------------------------------------------------
1 | import copy
2 | import gradio as gr
3 | from modules import script_callbacks, progress, shared, errors, ui_postprocessing
4 | from replacer.options import (EXT_NAME, EXT_NAME_LOWER, needHideSegmentAnythingAccordions,
5 | getDedicatedPagePath, on_ui_settings, needHideAnimateDiffAccordions, hideVideoInMainUI, extrasInDedicated,
6 | )
7 | from replacer.ui.tools_ui import IS_WEBUI_1_5
8 | from replacer.ui import replacer_main_ui
9 | from replacer.tools import getReplacerFooter
10 | from replacer.ui.tools_ui import watchOutputPanel, watchSetCustomScriptSourceForComponents
11 | from replacer.extensions import replacer_extensions
12 |
13 |
14 |
15 | def on_ui_tabs():
16 | result = []
17 | replacer_main_ui.reinitMainUIAfterUICreated()
18 | tab = replacer_main_ui.replacerMainUI.getReplacerTabUI()
19 | result.append((tab, EXT_NAME, EXT_NAME))
20 | if not hideVideoInMainUI():
21 | video_tab = replacer_main_ui.replacerMainUI.getReplacerVideoTabUI()
22 | video_title = f"{EXT_NAME} - video"
23 | result.append((video_tab, video_title, video_title))
24 | return result
25 |
26 | script_callbacks.on_ui_tabs(on_ui_tabs)
27 |
28 |
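# mountDedicatedPage() builds a standalone Gradio app (the main tab, the video tab and,
# optionally, an Extras tab) and mounts it on the webui server under getDedicatedPagePath(),
# re-registering the internal progress route so progress bars keep working on that page.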
29 | def mountDedicatedPage(demo, app):
30 | try:
31 | path = getDedicatedPagePath()
32 | app.add_api_route(f"{path}/internal/progress",
33 | progress.progressapi, methods=["POST"],
34 | response_model=progress.ProgressResponse)
35 | replacer_extensions.image_comparison.preloadImageComparisonTab()
36 |
37 | with gr.Blocks(title=EXT_NAME, analytics_enabled=False) as replacerUi:
38 | gr.Textbox(elem_id="txt2img_prompt", visible=False) # triggers onUiLoaded
39 | gr.Textbox(value=shared.opts.dumpjson(), elem_id="settings_json", visible=False)
40 |
41 | with gr.Tabs(elem_id='tabs'): # triggers progressbar
42 | with gr.Tab(label=f"{EXT_NAME} dedicated", elem_id=f"tab_{EXT_NAME_LOWER}_dedicated"):
43 | tab = replacer_main_ui.replacerMainUI_dedicated.getReplacerTabUI()
44 | tab.render()
45 |                 with gr.Tab(label="Video", elem_id="tab_video"):
46 | tab_video = replacer_main_ui.replacerMainUI_dedicated.getReplacerVideoTabUI()
47 | tab_video.render()
48 | if extrasInDedicated():
49 | with gr.Tab(label="Extras", elem_id="extras"):
50 | with gr.Blocks(analytics_enabled=False) as extras_interface:
51 | ui_postprocessing.create_ui()
52 | replacer_extensions.image_comparison.mountImageComparisonTab()
53 |
54 | footer = getReplacerFooter()
55 | gr.HTML(footer, elem_id="footer")
56 |
57 | loadsave = copy.copy(demo.ui_loadsave)
58 | loadsave.finalized_ui = False
59 | video_title = f"{EXT_NAME} - video"
60 | loadsave.add_block(tab, EXT_NAME)
61 | loadsave.add_block(tab_video, video_title)
62 | if extrasInDedicated():
63 | loadsave.add_block(extras_interface, "extras")
64 | loadsave.dump_defaults()
65 | replacerUi.ui_loadsave = loadsave
66 | gr.mount_gradio_app(app, replacerUi, path=path)
67 | except Exception as e:
68 | errors.report(f'[{EXT_NAME}] error while creating dedicated page: {e}', exc_info=True)
69 |
70 | script_callbacks.on_app_started(mountDedicatedPage)
71 |
72 |
73 | def hideSegmentAnythingAccordions(component, **kwargs):
74 | if type(component) is gr.Accordion and\
75 | getattr(component, 'label', "") == "Segment Anything":
76 |
77 | component.visible = False
78 | print(f"[{EXT_NAME}] Segment Anything accordion has been hidden")
79 |
80 | if needHideSegmentAnythingAccordions():
81 | script_callbacks.on_after_component(hideSegmentAnythingAccordions)
82 |
83 |
84 | def hideAnimateDiffAccordions(component, **kwargs):
85 | if type(component) is gr.Accordion and\
86 | getattr(component, 'label', "") == "AnimateDiff":
87 |
88 | component.visible = False
89 | print(f"[{EXT_NAME}] AnimateDiff accordion has been hidden")
90 |
91 | if needHideAnimateDiffAccordions():
92 | script_callbacks.on_after_component(hideAnimateDiffAccordions)
93 |
94 |
95 | script_callbacks.on_before_ui(replacer_main_ui.initMainUI)
96 | script_callbacks.on_after_component(replacer_extensions.controlnet.watchControlNetUI)
97 | script_callbacks.on_after_component(replacer_extensions.soft_inpainting.watchSoftInpaintUI)
98 | script_callbacks.on_after_component(watchOutputPanel)
99 | script_callbacks.on_after_component(watchSetCustomScriptSourceForComponents)
100 | script_callbacks.on_ui_settings(on_ui_settings)
101 | script_callbacks.on_after_component(replacer_extensions.image_comparison.addButtonIntoComparisonTab)
102 | script_callbacks.on_after_component(replacer_extensions.image_comparison.watchImageComparison)
103 |
--------------------------------------------------------------------------------
/style.css:
--------------------------------------------------------------------------------
1 | div.replacer-batch-count-size{
2 | min-width: 11em !important;
3 | }
4 |
5 | div.replacer-generation-size{
6 | min-width: 11em !important;
7 | }
8 |
9 | button.replacer-pause-button{
10 | text-align: left;
11 | width: fit-content;
12 | left: 0;
13 | }
14 |
15 | #replacer_video_masking_page_label {
16 | height: 40px !important;
17 | font-size: 120%;
18 | text-align: center;
19 | }
20 |
21 | #replacer_video_masking_progress_gallery_container {
22 | display: none;
23 | }
24 |
25 | #image_buttons_replacer_video_masking_progress {
26 | display: none;
27 | }
28 |
29 | #replacer_video_masking_progress_results_panel {
30 | background: unset;
31 | margin-top: 20px;
32 | margin-bottom: 3px;
33 | padding: unset;
34 | }
35 |
36 | #replacer_video_gallery {
37 | height: 440px;
38 | }
39 |
--------------------------------------------------------------------------------