├── .gitignore
├── README.md
├── misc
│   ├── ddetailer_example_1.png
│   ├── ddetailer_example_2.png
│   └── ddetailer_example_3.gif
└── scripts
    └── ddetailer.py
/.gitignore:
--------------------------------------------------------------------------------
__pycache__
*.ckpt
*.pth
/tmp
/outputs
/log
.vscode
/test-cases
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
Detection and img2img have come a long way. This project is no longer maintained, and there are now several alternatives that serve the same function. See the [μ Detection Detailer](https://github.com/wkpark/uddetailer) or [adetailer](https://github.com/Bing-su/adetailer) implementations.

# Detection Detailer
An object detection and auto-mask extension for [Stable Diffusion web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui). See [Installation](https://github.com/dustysys/ddetailer#installation).

![Detection Detailer example](/misc/ddetailer_example_1.png)

### Segmentation
Default models enable person and face instance segmentation.

![instance segmentation example](/misc/ddetailer_example_2.png)

### Detailing
With full-resolution inpainting, the extension is handy for improving faces without the hassle of manual masking.

![face detailing example](/misc/ddetailer_example_3.gif)

## Installation
1. Run `git clone https://github.com/dustysys/ddetailer.git` from your SD web UI `/extensions` folder. Alternatively, install from the Extensions tab with the URL `https://github.com/dustysys/ddetailer`.
2. Start or reload SD web UI.

The models and dependencies should download automatically. To install them manually, follow the [official instructions for installing mmdet](https://mmcv.readthedocs.io/en/latest/get_started/installation.html#install-with-mim-recommended). The models can be [downloaded here](https://huggingface.co/dustysys/ddetailer); place bounding box models (`anime-face_yolov3`) in `/models/mmdet/bbox` and instance segmentation models (`dd-person_mask2former`) in `/models/mmdet/segm`. See the [MMDetection docs](https://mmdetection.readthedocs.io/en/latest/1_exist_data_model.html) for guidance on training your own models. For using official MMDetection pretrained models, see [here](https://github.com/dustysys/ddetailer/issues/5#issuecomment-1311231989); these are trained for photorealism. See [Troubleshooting](https://github.com/dustysys/ddetailer#troubleshooting) if you encounter issues during installation.
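
If the automatic download fails, the snippet below is a rough sketch of what `scripts/ddetailer.py` itself does on first run; the relative `models/mmdet` path is an assumption that you run it from the web UI root.

```python
# Sketch of the extension's first-run model download (mirrors startup() in
# scripts/ddetailer.py). Paths assume the default web UI models folder.
import os
from basicsr.utils.download_util import load_file_from_url

base = "https://huggingface.co/dustysys/ddetailer"
for kind, name in [("bbox", "mmdet_anime-face_yolov3"),
                   ("segm", "mmdet_dd-person_mask2former")]:
    dest = os.path.join("models", "mmdet", kind)
    load_file_from_url(f"{base}/resolve/main/mmdet/{kind}/{name}.pth", dest)  # weights
    load_file_from_url(f"{base}/raw/main/mmdet/{kind}/{name}.py", dest)  # mmdet config
```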

## Usage
Select Detection Detailer as the script in SD web UI, then click 'Generate' to run it. Some tips:
- `anime-face_yolov3` can detect the bounding box of faces as the primary model, while `dd-person_mask2former` isolates the head's silhouette as the secondary model via the bitwise AND option. Refer to [this example](https://github.com/dustysys/ddetailer/issues/4#issuecomment-1311200268).
- The dilation factor expands the mask, while the x & y offsets move the mask around (see the sketch after this list).
- The script is available in txt2img mode as well and can improve the quality of your 10-pulls with moderate settings (low denoising strength).
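
For reference, the mask post-processing these sliders control reduces to a few OpenCV/NumPy operations. A minimal sketch mirroring the helpers in `scripts/ddetailer.py` (both masks here are made up for illustration):

```python
# Sketch of the mask post-processing (cf. dilate_masks, offset_masks and
# bitwise_and_masks in scripts/ddetailer.py). Both masks are hypothetical.
import cv2
import numpy as np

mask_a = np.zeros((512, 512), np.uint8)
mask_a[200:300, 180:330] = 255                      # pretend face detection (A)

kernel = np.ones((4, 4), np.uint8)                  # dilation factor 4
dilated = cv2.dilate(mask_a, kernel, iterations=1)  # grow the mask outward

shifted = np.roll(dilated, -10, axis=0)             # y offset 10: move mask up
shifted = np.roll(shifted, 5, axis=1)               # x offset 5: move mask right

mask_b = np.zeros_like(mask_a)
mask_b[150:400, 200:300] = 255                      # pretend silhouette mask (B)
combined = cv2.bitwise_and(shifted, mask_b)         # the 'A&B' bitwise option
```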

## Troubleshooting
If you get the message `ERROR: Failed building wheel for pycocotools`, follow [these steps](https://github.com/dustysys/ddetailer/issues/1#issuecomment-1309415543).

For any other installation issues, open an [issue](https://github.com/dustysys/ddetailer/issues).

## Credits
hysts/[anime-face-detector](https://github.com/hysts/anime-face-detector) - Creator of `anime-face_yolov3`, which has impressive performance on a variety of art styles.

skytnt/[anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation) - Synthetic dataset used to train `dd-person_mask2former`.

jerryli27/[AniSeg](https://github.com/jerryli27/AniSeg) - Annotated dataset used to train `dd-person_mask2former`.

open-mmlab/[mmdetection](https://github.com/open-mmlab/mmdetection) - Object detection toolset. `dd-person_mask2former` was trained via transfer learning using their [R-50 Mask2Former instance segmentation model](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask2former#instance-segmentation) as a base.

AUTOMATIC1111/[stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - Web UI for Stable Diffusion, base application for this extension.

--------------------------------------------------------------------------------
/misc/ddetailer_example_1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dustysys/ddetailer/08b06d8bc59a1ce5e6eec941a7c81c15b5163078/misc/ddetailer_example_1.png
--------------------------------------------------------------------------------
/misc/ddetailer_example_2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dustysys/ddetailer/08b06d8bc59a1ce5e6eec941a7c81c15b5163078/misc/ddetailer_example_2.png
--------------------------------------------------------------------------------
/misc/ddetailer_example_3.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dustysys/ddetailer/08b06d8bc59a1ce5e6eec941a7c81c15b5163078/misc/ddetailer_example_3.gif
--------------------------------------------------------------------------------
/scripts/ddetailer.py:
--------------------------------------------------------------------------------
import os
import sys
import cv2
from PIL import Image
import numpy as np
import gradio as gr

from modules import processing, images
from modules import scripts, script_callbacks, shared, devices, modelloader
from modules.processing import Processed, StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img
from modules.shared import opts, cmd_opts, state
from modules.sd_models import model_hash
from modules.paths import models_path
from basicsr.utils.download_util import load_file_from_url

dd_models_path = os.path.join(models_path, "mmdet")

def list_models(model_path):
    model_list = modelloader.load_models(model_path=model_path, ext_filter=[".pth"])

    def modeltitle(path, shorthash):
        abspath = os.path.abspath(path)

        if abspath.startswith(model_path):
            name = abspath.replace(model_path, '')
        else:
            name = os.path.basename(path)

        if name.startswith("\\") or name.startswith("/"):
            name = name[1:]

        shortname = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0]

        return f'{name} [{shorthash}]', shortname

    models = []
    for filename in model_list:
        h = model_hash(filename)
        title, short_model_name = modeltitle(filename, h)
        models.append(title)

    return models

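# First-run setup: install mmdet via openmim, then fetch the default bbox and
# segm models from the dustysys/ddetailer Hugging Face repo if none are present.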
def startup():
    from launch import is_installed, run
    if not is_installed("mmdet"):
        python = sys.executable
        run(f'"{python}" -m pip install -U openmim', desc="Installing openmim", errdesc="Couldn't install openmim")
        run(f'"{python}" -m mim install mmcv-full', desc="Installing mmcv-full", errdesc="Couldn't install mmcv-full")
        run(f'"{python}" -m pip install mmdet', desc="Installing mmdet", errdesc="Couldn't install mmdet")

    if (len(list_models(dd_models_path)) == 0):
        print("No detection models found, downloading...")
        bbox_path = os.path.join(dd_models_path, "bbox")
        segm_path = os.path.join(dd_models_path, "segm")
        load_file_from_url("https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/bbox/mmdet_anime-face_yolov3.pth", bbox_path)
        load_file_from_url("https://huggingface.co/dustysys/ddetailer/raw/main/mmdet/bbox/mmdet_anime-face_yolov3.py", bbox_path)
        load_file_from_url("https://huggingface.co/dustysys/ddetailer/resolve/main/mmdet/segm/mmdet_dd-person_mask2former.pth", segm_path)
        load_file_from_url("https://huggingface.co/dustysys/ddetailer/raw/main/mmdet/segm/mmdet_dd-person_mask2former.py", segm_path)

startup()

def gr_show(visible=True):
    return {"visible": visible, "__type__": "update"}

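# Discovered automatically by the web UI; appears as "Detection Detailer" in
# the Script dropdown of both txt2img and img2img.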
class DetectionDetailerScript(scripts.Script):
    def title(self):
        return "Detection Detailer"

    def show(self, is_img2img):
        return True

    def ui(self, is_img2img):
        import modules.ui

        model_list = list_models(dd_models_path)
        model_list.insert(0, "None")
        if is_img2img:
            info = gr.HTML("<p>Recommended settings: Use from inpaint tab, inpaint at full res ON, denoise &lt;0.5</p>")
        else:
            info = gr.HTML("")
        with gr.Group():
            with gr.Row():
                dd_model_a = gr.Dropdown(label="Primary detection model (A)", choices=model_list, value="None", visible=True, type="value")

            with gr.Row():
                dd_conf_a = gr.Slider(label='Detection confidence threshold % (A)', minimum=0, maximum=100, step=1, value=30, visible=False)
                dd_dilation_factor_a = gr.Slider(label='Dilation factor (A)', minimum=0, maximum=255, step=1, value=4, visible=False)

            with gr.Row():
                dd_offset_x_a = gr.Slider(label='X offset (A)', minimum=-200, maximum=200, step=1, value=0, visible=False)
                dd_offset_y_a = gr.Slider(label='Y offset (A)', minimum=-200, maximum=200, step=1, value=0, visible=False)

            with gr.Row():
                dd_preprocess_b = gr.Checkbox(label='Inpaint model B detections before model A runs', value=False, visible=False)
                dd_bitwise_op = gr.Radio(label='Bitwise operation', choices=['None', 'A&B', 'A-B'], value="None", visible=False)

        br = gr.HTML("<br>")

        with gr.Group():
            with gr.Row():
                dd_model_b = gr.Dropdown(label="Secondary detection model (B) (optional)", choices=model_list, value="None", visible=False, type="value")

            with gr.Row():
                dd_conf_b = gr.Slider(label='Detection confidence threshold % (B)', minimum=0, maximum=100, step=1, value=30, visible=False)
                dd_dilation_factor_b = gr.Slider(label='Dilation factor (B)', minimum=0, maximum=255, step=1, value=4, visible=False)

            with gr.Row():
                dd_offset_x_b = gr.Slider(label='X offset (B)', minimum=-200, maximum=200, step=1, value=0, visible=False)
                dd_offset_y_b = gr.Slider(label='Y offset (B)', minimum=-200, maximum=200, step=1, value=0, visible=False)

        with gr.Group():
            with gr.Row():
                dd_mask_blur = gr.Slider(label='Mask blur ', minimum=0, maximum=64, step=1, value=4, visible=(not is_img2img))
                dd_denoising_strength = gr.Slider(label='Denoising strength (Inpaint)', minimum=0.0, maximum=1.0, step=0.01, value=0.4, visible=(not is_img2img))

            with gr.Row():
                dd_inpaint_full_res = gr.Checkbox(label='Inpaint at full resolution ', value=True, visible=(not is_img2img))
                dd_inpaint_full_res_padding = gr.Slider(label='Inpaint at full resolution padding, pixels ', minimum=0, maximum=256, step=4, value=32, visible=(not is_img2img))

        # Reveal each model's controls only when that model is selected.
        dd_model_a.change(
            lambda modelname: {
                dd_model_b: gr_show(modelname != "None"),
                dd_conf_a: gr_show(modelname != "None"),
                dd_dilation_factor_a: gr_show(modelname != "None"),
                dd_offset_x_a: gr_show(modelname != "None"),
                dd_offset_y_a: gr_show(modelname != "None")
            },
            inputs=[dd_model_a],
            outputs=[dd_model_b, dd_conf_a, dd_dilation_factor_a, dd_offset_x_a, dd_offset_y_a]
        )

        dd_model_b.change(
            lambda modelname: {
                dd_preprocess_b: gr_show(modelname != "None"),
                dd_bitwise_op: gr_show(modelname != "None"),
                dd_conf_b: gr_show(modelname != "None"),
                dd_dilation_factor_b: gr_show(modelname != "None"),
                dd_offset_x_b: gr_show(modelname != "None"),
                dd_offset_y_b: gr_show(modelname != "None")
            },
            inputs=[dd_model_b],
            outputs=[dd_preprocess_b, dd_bitwise_op, dd_conf_b, dd_dilation_factor_b, dd_offset_x_b, dd_offset_y_b]
        )

        return [info,
                dd_model_a,
                dd_conf_a, dd_dilation_factor_a,
                dd_offset_x_a, dd_offset_y_a,
                dd_preprocess_b, dd_bitwise_op,
                br,
                dd_model_b,
                dd_conf_b, dd_dilation_factor_b,
                dd_offset_x_b, dd_offset_y_b,
                dd_mask_blur, dd_denoising_strength,
                dd_inpaint_full_res, dd_inpaint_full_res_padding
        ]

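    # One pass per requested image: in txt2img mode the initial image is
    # generated first, then every detection becomes its own inpainting job.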
    def run(self, p, info,
            dd_model_a,
            dd_conf_a, dd_dilation_factor_a,
            dd_offset_x_a, dd_offset_y_a,
            dd_preprocess_b, dd_bitwise_op,
            br,
            dd_model_b,
            dd_conf_b, dd_dilation_factor_b,
            dd_offset_x_b, dd_offset_y_b,
            dd_mask_blur, dd_denoising_strength,
            dd_inpaint_full_res, dd_inpaint_full_res_padding):

        processing.fix_seed(p)
        initial_info = None
        seed = p.seed
        p.batch_size = 1
        ddetail_count = p.n_iter
        p.n_iter = 1
        p.do_not_save_grid = True
        p.do_not_save_samples = True
        is_txt2img = isinstance(p, StableDiffusionProcessingTxt2Img)
        if (not is_txt2img):
            orig_image = p.init_images[0]
        else:
            # Build an img2img pipeline that inherits the txt2img settings;
            # it is reused for every inpainting pass.
            p_txt = p
            p = StableDiffusionProcessingImg2Img(
                init_images=None,
                resize_mode=0,
                denoising_strength=dd_denoising_strength,
                mask=None,
                mask_blur=dd_mask_blur,
                inpainting_fill=1,
                inpaint_full_res=dd_inpaint_full_res,
                inpaint_full_res_padding=dd_inpaint_full_res_padding,
                inpainting_mask_invert=0,
                sd_model=p_txt.sd_model,
                outpath_samples=p_txt.outpath_samples,
                outpath_grids=p_txt.outpath_grids,
                prompt=p_txt.prompt,
                negative_prompt=p_txt.negative_prompt,
                styles=p_txt.styles,
                seed=p_txt.seed,
                subseed=p_txt.subseed,
                subseed_strength=p_txt.subseed_strength,
                seed_resize_from_h=p_txt.seed_resize_from_h,
                seed_resize_from_w=p_txt.seed_resize_from_w,
                sampler_name=p_txt.sampler_name,
                n_iter=p_txt.n_iter,
                steps=p_txt.steps,
                cfg_scale=p_txt.cfg_scale,
                width=p_txt.width,
                height=p_txt.height,
                tiling=p_txt.tiling,
            )
            p.do_not_save_grid = True
            p.do_not_save_samples = True
        output_images = []
        state.job_count = ddetail_count
        for n in range(ddetail_count):
            devices.torch_gc()
            start_seed = seed + n
            if (is_txt2img):
                print(f"Processing initial image for output generation {n + 1}.")
                p_txt.seed = start_seed
                processed = processing.process_images(p_txt)
                init_image = processed.images[0]
            else:
                init_image = orig_image

            output_images.append(init_image)
            masks_a = []
            masks_b_pre = []

            # Optional secondary pre-processing run: inpaint model B's
            # detections first so model A works on the cleaned-up image.
            if (dd_model_b != "None" and dd_preprocess_b):
                label_b_pre = "B"
                results_b_pre = inference(init_image, dd_model_b, dd_conf_b/100.0, label_b_pre)
                masks_b_pre = create_segmasks(results_b_pre)
                masks_b_pre = dilate_masks(masks_b_pre, dd_dilation_factor_b, 1)
                masks_b_pre = offset_masks(masks_b_pre, dd_offset_x_b, dd_offset_y_b)
                if (len(masks_b_pre) > 0):
                    results_b_pre = update_result_masks(results_b_pre, masks_b_pre)
                    segmask_preview_b = create_segmask_preview(results_b_pre, init_image)
                    shared.state.current_image = segmask_preview_b
                    if (opts.dd_save_previews):
                        images.save_image(segmask_preview_b, opts.outdir_ddetailer_previews, "", start_seed, p.prompt, opts.samples_format, p=p)
                    gen_count = len(masks_b_pre)
                    state.job_count += gen_count
                    print(f"Processing {gen_count} model {label_b_pre} detections for output generation {n + 1}.")
                    p.seed = start_seed
                    p.init_images = [init_image]

                    for i in range(gen_count):
                        p.image_mask = masks_b_pre[i]
                        if (opts.dd_save_masks):
                            images.save_image(masks_b_pre[i], opts.outdir_ddetailer_masks, "", start_seed, p.prompt, opts.samples_format, p=p)
                        processed = processing.process_images(p)
                        p.seed = processed.seed + 1
                        p.init_images = processed.images

                    if (gen_count > 0):
                        output_images[n] = processed.images[0]
                        init_image = processed.images[0]

                else:
                    print(f"No model B detections for output generation {n + 1} with current settings.")

            # Primary run
            if (dd_model_a != "None"):
                label_a = "A"
                if (dd_model_b != "None" and dd_bitwise_op != "None"):
                    label_a = dd_bitwise_op
                results_a = inference(init_image, dd_model_a, dd_conf_a/100.0, label_a)
                masks_a = create_segmasks(results_a)
                masks_a = dilate_masks(masks_a, dd_dilation_factor_a, 1)
                masks_a = offset_masks(masks_a, dd_offset_x_a, dd_offset_y_a)
                if (dd_model_b != "None" and dd_bitwise_op != "None"):
                    label_b = "B"
                    results_b = inference(init_image, dd_model_b, dd_conf_b/100.0, label_b)
                    masks_b = create_segmasks(results_b)
                    masks_b = dilate_masks(masks_b, dd_dilation_factor_b, 1)
                    masks_b = offset_masks(masks_b, dd_offset_x_b, dd_offset_y_b)
                    if (len(masks_b) > 0):
                        combined_mask_b = combine_masks(masks_b)
                        # Iterate in reverse so deletions don't shift indices
                        # still to be visited.
                        for i in reversed(range(len(masks_a))):
                            if (dd_bitwise_op == "A&B"):
                                masks_a[i] = bitwise_and_masks(masks_a[i], combined_mask_b)
                            elif (dd_bitwise_op == "A-B"):
                                masks_a[i] = subtract_masks(masks_a[i], combined_mask_b)
                            if (is_allblack(masks_a[i])):
                                del masks_a[i]
                                for result in results_a:
                                    del result[i]

                    else:
                        print("No model B detections to overlap with model A masks")
                        results_a = []
                        masks_a = []

                if (len(masks_a) > 0):
                    results_a = update_result_masks(results_a, masks_a)
                    segmask_preview_a = create_segmask_preview(results_a, init_image)
                    shared.state.current_image = segmask_preview_a
                    if (opts.dd_save_previews):
                        images.save_image(segmask_preview_a, opts.outdir_ddetailer_previews, "", start_seed, p.prompt, opts.samples_format, p=p)
                    gen_count = len(masks_a)
                    state.job_count += gen_count
                    print(f"Processing {gen_count} model {label_a} detections for output generation {n + 1}.")
                    p.seed = start_seed
                    p.init_images = [init_image]

                    for i in range(gen_count):
                        p.image_mask = masks_a[i]
                        if (opts.dd_save_masks):
                            images.save_image(masks_a[i], opts.outdir_ddetailer_masks, "", start_seed, p.prompt, opts.samples_format, p=p)

                        processed = processing.process_images(p)
                        if initial_info is None:
                            initial_info = processed.info
                        p.seed = processed.seed + 1
                        p.init_images = processed.images

                    if (gen_count > 0):
                        output_images[n] = processed.images[0]
                        if (opts.samples_save):
                            images.save_image(processed.images[0], p.outpath_samples, "", start_seed, p.prompt, opts.samples_format, info=initial_info, p=p)

                else:
                    print(f"No model {label_a} detections for output generation {n + 1} with current settings.")
            state.job = f"Generation {n + 1} out of {state.job_count}"
        if (initial_info is None):
            initial_info = "No detections found."

        return Processed(p, output_images, seed, initial_info)

def modeldataset(model_shortname):
    path = modelpath(model_shortname)
    if ("mmdet" in path and "segm" in path):
        dataset = 'coco'
    else:
        dataset = 'bbox'
    return dataset

def modelpath(model_shortname):
    model_list = modelloader.load_models(model_path=dd_models_path, ext_filter=[".pth"])
    model_h = model_shortname.split("[")[-1].split("]")[0]
    for path in model_list:
        if (model_hash(path) == model_h):
            return path

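# Detection results are a 3-list: [labels, bboxes (x0, y0, x1, y1, score),
# boolean segmentation masks], with one entry per detection.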
def update_result_masks(results, masks):
    for i in range(len(masks)):
        boolmask = np.array(masks[i], dtype=bool)
        results[2][i] = boolmask
    return results

def create_segmask_preview(results, image):
    labels = results[0]
    bboxes = results[1]
    segms = results[2]

    cv2_image = np.array(image)
    cv2_image = cv2_image[:, :, ::-1].copy()

    for i in range(len(segms)):
        color = np.full_like(cv2_image, np.random.randint(100, 256, (1, 3), dtype=np.uint8))
        alpha = 0.2
        color_image = cv2.addWeighted(cv2_image, alpha, color, 1-alpha, 0)
        cv2_mask = segms[i].astype(np.uint8) * 255
        cv2_mask_bool = np.array(segms[i], dtype=bool)
        centroid = np.mean(np.argwhere(cv2_mask_bool), axis=0)
        centroid_x, centroid_y = int(centroid[1]), int(centroid[0])

        cv2_mask_rgb = cv2.merge((cv2_mask, cv2_mask, cv2_mask))
        cv2_image = np.where(cv2_mask_rgb == 255, color_image, cv2_image)
        text_color = tuple([int(x) for x in (color[0][0] - 100)])
        name = labels[i]
        score = bboxes[i][4]
        score = str(score)[:4]
        text = name + ":" + score
        cv2.putText(cv2_image, text, (centroid_x - 30, centroid_y), cv2.FONT_HERSHEY_DUPLEX, 0.4, text_color, 1, cv2.LINE_AA)

    if (len(segms) > 0):
        preview_image = Image.fromarray(cv2.cvtColor(cv2_image, cv2.COLOR_BGR2RGB))
    else:
        preview_image = image

    return preview_image

def is_allblack(mask):
    cv2_mask = np.array(mask)
    return cv2.countNonZero(cv2_mask) == 0

def bitwise_and_masks(mask1, mask2):
    cv2_mask1 = np.array(mask1)
    cv2_mask2 = np.array(mask2)
    cv2_mask = cv2.bitwise_and(cv2_mask1, cv2_mask2)
    mask = Image.fromarray(cv2_mask)
    return mask

def subtract_masks(mask1, mask2):
    cv2_mask1 = np.array(mask1)
    cv2_mask2 = np.array(mask2)
    cv2_mask = cv2.subtract(cv2_mask1, cv2_mask2)
    mask = Image.fromarray(cv2_mask)
    return mask

def dilate_masks(masks, dilation_factor, iterations=1):
    if dilation_factor == 0:
        return masks
    dilated_masks = []
    kernel = np.ones((dilation_factor, dilation_factor), np.uint8)
    for i in range(len(masks)):
        cv2_mask = np.array(masks[i])
        # Pass iterations by keyword; the third positional argument of
        # cv2.dilate is the output array, not the iteration count.
        dilated_mask = cv2.dilate(cv2_mask, kernel, iterations=iterations)
        dilated_masks.append(Image.fromarray(dilated_mask))
    return dilated_masks

def offset_masks(masks, offset_x, offset_y):
    if (offset_x == 0 and offset_y == 0):
        return masks
    offset_mask_list = []
    for i in range(len(masks)):
        cv2_mask = np.array(masks[i])
        offset_mask = cv2_mask.copy()
        offset_mask = np.roll(offset_mask, -offset_y, axis=0)
        offset_mask = np.roll(offset_mask, offset_x, axis=1)

        offset_mask_list.append(Image.fromarray(offset_mask))
    return offset_mask_list

def combine_masks(masks):
    initial_cv2_mask = np.array(masks[0])
    combined_cv2_mask = initial_cv2_mask
    for i in range(1, len(masks)):
        cv2_mask = np.array(masks[i])
        combined_cv2_mask = cv2.bitwise_or(combined_cv2_mask, cv2_mask)

    combined_mask = Image.fromarray(combined_cv2_mask)
    return combined_mask

def on_ui_settings():
    shared.opts.add_option("dd_save_previews", shared.OptionInfo(False, "Save mask previews", section=("ddetailer", "Detection Detailer")))
    shared.opts.add_option("outdir_ddetailer_previews", shared.OptionInfo("extensions/ddetailer/outputs/masks-previews", 'Output directory for mask previews', section=("ddetailer", "Detection Detailer")))
    shared.opts.add_option("dd_save_masks", shared.OptionInfo(False, "Save masks", section=("ddetailer", "Detection Detailer")))
    shared.opts.add_option("outdir_ddetailer_masks", shared.OptionInfo("extensions/ddetailer/outputs/masks", 'Output directory for masks', section=("ddetailer", "Detection Detailer")))

def create_segmasks(results):
    segms = results[2]
    segmasks = []
    for i in range(len(segms)):
        cv2_mask = segms[i].astype(np.uint8) * 255
        mask = Image.fromarray(cv2_mask)
        segmasks.append(mask)

    return segmasks

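# mmdet is imported late, after startup() has already had the chance to
# install it on first run.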
import mmcv
from mmdet.core import get_classes
from mmdet.apis import (inference_detector,
                        init_detector)

def get_device():
    # Defaults to CPU unless the web UI was launched with --device-id.
    device_id = shared.cmd_opts.device_id
    if device_id is not None:
        cuda_device = f"cuda:{device_id}"
    else:
        cuda_device = "cpu"
    return cuda_device

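# Route to bbox or segm inference based on which models/mmdet subfolder the
# checkpoint lives in.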
def inference(image, modelname, conf_thres, label):
    path = modelpath(modelname)
    if ("mmdet" in path and "bbox" in path):
        results = inference_mmdet_bbox(image, modelname, conf_thres, label)
    elif ("mmdet" in path and "segm" in path):
        results = inference_mmdet_segm(image, modelname, conf_thres, label)
    else:
        results = [[], [], []]  # unknown model location: return no detections
    return results

def inference_mmdet_segm(image, modelname, conf_thres, label):
    model_checkpoint = modelpath(modelname)
    model_config = os.path.splitext(model_checkpoint)[0] + ".py"
    model_device = get_device()
    model = init_detector(model_config, model_checkpoint, device=model_device)
    mmdet_results = inference_detector(model, np.array(image))
    bbox_results, segm_results = mmdet_results
    dataset = modeldataset(modelname)
    classes = get_classes(dataset)
    labels = [
        np.full(bbox.shape[0], i, dtype=np.int32)
        for i, bbox in enumerate(bbox_results)
    ]
    n, m = bbox_results[0].shape
    if (n == 0):
        return [[], [], []]
    labels = np.concatenate(labels)
    bboxes = np.vstack(bbox_results)
    segms = mmcv.concat_list(segm_results)
    filter_inds = np.where(bboxes[:, -1] > conf_thres)[0]
    results = [[], [], []]
    for i in filter_inds:
        results[0].append(label + "-" + classes[labels[i]])
        results[1].append(bboxes[i])
        results[2].append(segms[i])

    return results

def inference_mmdet_bbox(image, modelname, conf_thres, label):
    model_checkpoint = modelpath(modelname)
    model_config = os.path.splitext(model_checkpoint)[0] + ".py"
    model_device = get_device()
    model = init_detector(model_config, model_checkpoint, device=model_device)
    results = inference_detector(model, np.array(image))
    cv2_image = np.array(image)
    cv2_image = cv2_image[:, :, ::-1].copy()
    cv2_gray = cv2.cvtColor(cv2_image, cv2.COLOR_BGR2GRAY)

    # Bbox models have no masks, so build a filled-rectangle mask per detection.
    segms = []
    for (x0, y0, x1, y1, conf) in results[0]:
        cv2_mask = np.zeros(cv2_gray.shape, np.uint8)
        cv2.rectangle(cv2_mask, (int(x0), int(y0)), (int(x1), int(y1)), 255, -1)
        cv2_mask_bool = cv2_mask.astype(bool)
        segms.append(cv2_mask_bool)

    n, m = results[0].shape
    if (n == 0):
        return [[], [], []]
    bboxes = np.vstack(results[0])
    filter_inds = np.where(bboxes[:, -1] > conf_thres)[0]
    results = [[], [], []]
    for i in filter_inds:
        results[0].append(label)
        results[1].append(bboxes[i])
        results[2].append(segms[i])

    return results

script_callbacks.on_ui_settings(on_ui_settings)

--------------------------------------------------------------------------------