├── .github └── workflows │ └── publish.yml ├── LICENSE ├── README.md ├── __init__.py ├── canvas_node.py ├── js ├── Canvas.js └── Canvas_view.js ├── pyproject.toml └── requirements.txt /.github/workflows/publish.yml: -------------------------------------------------------------------------------- 1 | name: Publish to Comfy registry 2 | on: 3 | workflow_dispatch: 4 | push: 5 | branches: 6 | - main 7 | - master 8 | paths: 9 | - "pyproject.toml" 10 | 11 | jobs: 12 | publish-node: 13 | name: Publish Custom Node to registry 14 | runs-on: ubuntu-latest 15 | # If this is a forked repository, skip the workflow. 16 | if: github.event.repository.fork == false 17 | steps: 18 | - name: Check out code 19 | uses: actions/checkout@v4 20 | - name: Publish Custom Node 21 | uses: Comfy-Org/publish-node-action@main 22 | with: 23 | ## Add your own personal access token to your GitHub repository secrets and reference it here. 24 | personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }} 25 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 tanglup 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Comfyui-Ycnode 2 | **Canvas Node** 3 | 4 | **1**. Basic Operations: 5 | Select Image: Click on the image with the mouse to select it. 6 | Deselect Image: Double-click on the image or click on an empty area to deselect it. 7 | 8 | **2**. Zooming Image: 9 | Zoom In/Out: With the image selected, scroll the mouse wheel up/down to zoom in/out. 10 | 11 | **3**. Rotating Image: 12 | With the image selected, hold down the Shift key and scroll the mouse wheel up/down to rotate the image clockwise/counterclockwise. 13 | 14 | **4**. Hold down the Alt key and drag the image with the mouse to stretch or compress it. 15 | 16 | **5**. Matting: the Matting button performs the cutout (background removal) function. 17 | 18 | **6**. Other functions: 19 | All of them are on the buttons and are fairly self-explanatory. 20 | 21 | **After changing the canvas size, drag the node's interface box to refresh it.** 22 | 23 | Generally, the model is downloaded automatically.
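For reference, the node fetches the weights from Hugging Face on first use; below is a minimal sketch of what `BiRefNetMatting.load_model` in `canvas_node.py` does (the actual cache directory is resolved relative to ComfyUI's `models` folder; the literal path here is shown only for brevity):

```python
from transformers import AutoModelForImageSegmentation

# Downloaded on first use and cached under ComfyUI's models/BiRefNet directory
model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet",
    trust_remote_code=True,
    cache_dir="models/BiRefNet",  # assumption: relative path for illustration
)
model.eval()  # the node moves it to CUDA when a GPU is available
```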
If it doesn't work, you can place it manually as follows: 24 | Manual placement 25 | 26 | Model Name: models--ZhengPeng7--BiRefNet 27 | 28 | The cloud-drive links are as follows: 29 | 30 | Baidu link: https://pan.baidu.com/s/1PiZvuHcdlcZGoL7WDYnMkA?pwd=nt76 31 | Google link: https://drive.google.com/drive/folders/1BCLInCLH89fmTpYoP8Sgs_Eqww28f_wq?usp=sharing 32 | 33 | Place it in: models/BiRefNet 34 | 35 | 2024/11/24 Updated Features: 36 | Added input image and mask inputs; added blending mode options for images in the canvas (select an image, then Shift+click it to pop up the menu) 37 | Note: the blending mode of the output does not refresh on its own; change the canvas content slightly to force an update 38 | ![1732416209647](https://github.com/user-attachments/assets/befb83a6-44aa-436f-9e11-614e4090a8d9) 39 | 40 | 41 | **Example workflow: canvas node, matting, layout, and output** 42 | ![1732110001138](https://github.com/user-attachments/assets/372a14a5-8255-4768-9547-d8a6083bb76c) 43 | 44 | 45 | **Pressing the Matting button cuts out the image in the canvas; you can then continue editing it in the canvas.** 46 | ![image](https://github.com/user-attachments/assets/faa4156c-e511-4c8c-9165-1a139fb8c894) 47 | 48 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- 1 | from .canvas_node import CanvasNode 2 | 3 | # Set up routes 4 | CanvasNode.setup_routes() 5 | 6 | NODE_CLASS_MAPPINGS = { 7 | "CanvasNode": CanvasNode 8 | } 9 | 10 | NODE_DISPLAY_NAME_MAPPINGS = { 11 | "CanvasNode": "Canvas Node" 12 | } 13 | 14 | WEB_DIRECTORY = "./js" 15 | 16 | __all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS", "WEB_DIRECTORY"] 17 | -------------------------------------------------------------------------------- /canvas_node.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageOps 2 | import hashlib 3 | import torch 4 | import numpy as np 5 | import folder_paths 6 | from server import PromptServer 7 | from aiohttp import web 8 | import os 9 | from tqdm import tqdm 10 | from torchvision import transforms 11 | from transformers import AutoModelForImageSegmentation, PretrainedConfig 12 | import torch.nn.functional as F 13 | import traceback 14 | import uuid 15 | import time 16 | import base64 17 | from PIL import Image 18 | import io 19 | 20 | # Enable high-precision matmul 21 | torch.set_float32_matmul_precision('high') 22 | 23 | # Configuration class 24 | class BiRefNetConfig(PretrainedConfig): 25 | model_type = "BiRefNet" 26 | def __init__(self, bb_pretrained=False, **kwargs): 27 | self.bb_pretrained = bb_pretrained 28 | super().__init__(**kwargs) 29 | 30 | # Model class 31 | class BiRefNet(torch.nn.Module): 32 | def __init__(self, config): 33 | super().__init__() 34 | # Basic network structure 35 | self.encoder = torch.nn.Sequential( 36 | torch.nn.Conv2d(3, 64, kernel_size=3, padding=1), 37 | torch.nn.ReLU(inplace=True), 38 | torch.nn.Conv2d(64, 64, kernel_size=3, padding=1), 39 | torch.nn.ReLU(inplace=True) 40 | ) 41 | 42 | self.decoder = torch.nn.Sequential( 43 | torch.nn.Conv2d(64, 32, kernel_size=3, padding=1), 44 | torch.nn.ReLU(inplace=True), 45 | torch.nn.Conv2d(32, 1, kernel_size=1) 46 | ) 47 | 48 | def forward(self, x): 49 | features = self.encoder(x) 50 | output = self.decoder(features) 51 | return [output] 52 | 53 | class CanvasNode: 54 | _canvas_cache = { 55 | 'image': None, 56 | 'mask': None, 57 | 'cache_enabled': True, 58 |
'data_flow_status': {}, 59 | 'persistent_cache': {}, 60 | 'last_execution_id': None 61 | } 62 | 63 | def __init__(self): 64 | super().__init__() 65 | self.flow_id = str(uuid.uuid4()) 66 | # 从持久化缓存恢复数据 67 | if self.__class__._canvas_cache['persistent_cache']: 68 | self.restore_cache() 69 | 70 | def restore_cache(self): 71 | """从持久化缓存恢复数据,除非是新的执行""" 72 | try: 73 | persistent = self.__class__._canvas_cache['persistent_cache'] 74 | current_execution = self.get_execution_id() 75 | 76 | # 只有在新的执行ID时才清除缓存 77 | if current_execution != self.__class__._canvas_cache['last_execution_id']: 78 | print(f"New execution detected: {current_execution}") 79 | self.__class__._canvas_cache['image'] = None 80 | self.__class__._canvas_cache['mask'] = None 81 | self.__class__._canvas_cache['last_execution_id'] = current_execution 82 | else: 83 | # 否则保留现有缓存 84 | if persistent.get('image') is not None: 85 | self.__class__._canvas_cache['image'] = persistent['image'] 86 | print("Restored image from persistent cache") 87 | if persistent.get('mask') is not None: 88 | self.__class__._canvas_cache['mask'] = persistent['mask'] 89 | print("Restored mask from persistent cache") 90 | except Exception as e: 91 | print(f"Error restoring cache: {str(e)}") 92 | 93 | def get_execution_id(self): 94 | """获取当前工作流执行ID""" 95 | try: 96 | # 可以使用时间戳或其他唯一标识 97 | return str(int(time.time() * 1000)) 98 | except Exception as e: 99 | print(f"Error getting execution ID: {str(e)}") 100 | return None 101 | 102 | def update_persistent_cache(self): 103 | """更新持久化缓存""" 104 | try: 105 | self.__class__._canvas_cache['persistent_cache'] = { 106 | 'image': self.__class__._canvas_cache['image'], 107 | 'mask': self.__class__._canvas_cache['mask'] 108 | } 109 | print("Updated persistent cache") 110 | except Exception as e: 111 | print(f"Error updating persistent cache: {str(e)}") 112 | 113 | def track_data_flow(self, stage, status, data_info=None): 114 | """追踪数据流状态""" 115 | flow_status = { 116 | 'timestamp': time.time(), 117 | 'stage': stage, 118 | 'status': status, 119 | 'data_info': data_info 120 | } 121 | print(f"Data Flow [{self.flow_id}] - Stage: {stage}, Status: {status}") 122 | if data_info: 123 | print(f"Data Info: {data_info}") 124 | 125 | self.__class__._canvas_cache['data_flow_status'][self.flow_id] = flow_status 126 | 127 | @classmethod 128 | def INPUT_TYPES(cls): 129 | return { 130 | "required": { 131 | "canvas_image": ("STRING", {"default": "canvas_image.png"}), 132 | "trigger": ("INT", {"default": 0, "min": 0, "max": 99999999, "step": 1, "hidden": True}), 133 | "output_switch": ("BOOLEAN", {"default": True}), 134 | "cache_enabled": ("BOOLEAN", {"default": True, "label": "Enable Cache"}) 135 | }, 136 | "optional": { 137 | "input_image": ("IMAGE",), 138 | "input_mask": ("MASK",) 139 | } 140 | } 141 | 142 | RETURN_TYPES = ("IMAGE", "MASK") 143 | RETURN_NAMES = ("image", "mask") 144 | FUNCTION = "process_canvas_image" 145 | CATEGORY = "Ycanvas" 146 | 147 | def add_image_to_canvas(self, input_image): 148 | """处理输入图像""" 149 | try: 150 | # 确保输入图像是正确的格式 151 | if not isinstance(input_image, torch.Tensor): 152 | raise ValueError("Input image must be a torch.Tensor") 153 | 154 | # 处理图像维度 155 | if input_image.dim() == 4: 156 | input_image = input_image.squeeze(0) 157 | 158 | # 转换为标准格式 159 | if input_image.dim() == 3 and input_image.shape[0] in [1, 3]: 160 | input_image = input_image.permute(1, 2, 0) 161 | 162 | return input_image 163 | 164 | except Exception as e: 165 | print(f"Error in add_image_to_canvas: {str(e)}") 166 | return None 167 | 168 | def 
add_mask_to_canvas(self, input_mask, input_image): 169 | """处理输入遮罩""" 170 | try: 171 | # 确保输入遮罩是正确的格式 172 | if not isinstance(input_mask, torch.Tensor): 173 | raise ValueError("Input mask must be a torch.Tensor") 174 | 175 | # 处理遮罩维度 176 | if input_mask.dim() == 4: 177 | input_mask = input_mask.squeeze(0) 178 | if input_mask.dim() == 3 and input_mask.shape[0] == 1: 179 | input_mask = input_mask.squeeze(0) 180 | 181 | # 确保遮罩尺寸与图像匹配 182 | if input_image is not None: 183 | expected_shape = input_image.shape[:2] 184 | if input_mask.shape != expected_shape: 185 | input_mask = F.interpolate( 186 | input_mask.unsqueeze(0).unsqueeze(0), 187 | size=expected_shape, 188 | mode='bilinear', 189 | align_corners=False 190 | ).squeeze() 191 | 192 | return input_mask 193 | 194 | except Exception as e: 195 | print(f"Error in add_mask_to_canvas: {str(e)}") 196 | return None 197 | 198 | def process_canvas_image(self, canvas_image, trigger, output_switch, cache_enabled, input_image=None, input_mask=None): 199 | try: 200 | current_execution = self.get_execution_id() 201 | print(f"Processing canvas image, execution ID: {current_execution}") 202 | 203 | # 检查是否是新的执行 204 | if current_execution != self.__class__._canvas_cache['last_execution_id']: 205 | print(f"New execution detected: {current_execution}") 206 | # 清除旧的缓存 207 | self.__class__._canvas_cache['image'] = None 208 | self.__class__._canvas_cache['mask'] = None 209 | self.__class__._canvas_cache['last_execution_id'] = current_execution 210 | 211 | # 处理输入图像 212 | if input_image is not None: 213 | print("Input image received, converting to PIL Image...") 214 | # 将tensor转换为PIL Image并存储到缓存 215 | if isinstance(input_image, torch.Tensor): 216 | if input_image.dim() == 4: 217 | input_image = input_image.squeeze(0) # 移除batch维度 218 | 219 | # 确保图像格式为[H, W, C] 220 | if input_image.shape[0] == 3: # 如果是[C, H, W]格式 221 | input_image = input_image.permute(1, 2, 0) 222 | 223 | # 转换为numpy数组并确保值范围在0-255 224 | image_array = (input_image.cpu().numpy() * 255).astype(np.uint8) 225 | 226 | # 确保数组形状正确 227 | if len(image_array.shape) == 2: # 如果是灰度图 228 | image_array = np.stack([image_array] * 3, axis=-1) 229 | elif len(image_array.shape) == 3 and image_array.shape[-1] != 3: 230 | image_array = np.transpose(image_array, (1, 2, 0)) 231 | 232 | try: 233 | # 转换为PIL Image 234 | pil_image = Image.fromarray(image_array, 'RGB') 235 | print("Successfully converted to PIL Image") 236 | # 存储PIL Image到缓存 237 | self.__class__._canvas_cache['image'] = pil_image 238 | print(f"Image stored in cache with size: {pil_image.size}") 239 | except Exception as e: 240 | print(f"Error converting to PIL Image: {str(e)}") 241 | print(f"Array shape: {image_array.shape}, dtype: {image_array.dtype}") 242 | raise 243 | 244 | # 处理输入遮罩 245 | if input_mask is not None: 246 | print("Input mask received, converting to PIL Image...") 247 | if isinstance(input_mask, torch.Tensor): 248 | if input_mask.dim() == 4: 249 | input_mask = input_mask.squeeze(0) 250 | if input_mask.dim() == 3 and input_mask.shape[0] == 1: 251 | input_mask = input_mask.squeeze(0) 252 | 253 | # 转换为PIL Image 254 | mask_array = (input_mask.cpu().numpy() * 255).astype(np.uint8) 255 | pil_mask = Image.fromarray(mask_array, 'L') 256 | print("Successfully converted mask to PIL Image") 257 | # 存储遮罩到缓存 258 | self.__class__._canvas_cache['mask'] = pil_mask 259 | print(f"Mask stored in cache with size: {pil_mask.size}") 260 | 261 | # 更新缓存开关状态 262 | self.__class__._canvas_cache['cache_enabled'] = cache_enabled 263 | 264 | try: 265 | # 尝试读取画布图像 266 | 
path_image = folder_paths.get_annotated_filepath(canvas_image) 267 | i = Image.open(path_image) 268 | i = ImageOps.exif_transpose(i) 269 | if i.mode not in ['RGB', 'RGBA']: 270 | i = i.convert('RGB') 271 | image = np.array(i).astype(np.float32) / 255.0 272 | if i.mode == 'RGBA': 273 | rgb = image[..., :3] 274 | alpha = image[..., 3:] 275 | image = rgb * alpha + (1 - alpha) * 0.5 276 | processed_image = torch.from_numpy(image)[None,] 277 | except Exception as e: 278 | # 如果读取失败,创建白色画布 279 | processed_image = torch.ones((1, 512, 512, 3), dtype=torch.float32) 280 | 281 | try: 282 | # 尝试读取遮罩图像 283 | path_mask = path_image.replace('.png', '_mask.png') 284 | if os.path.exists(path_mask): 285 | mask = Image.open(path_mask).convert('L') 286 | mask = np.array(mask).astype(np.float32) / 255.0 287 | processed_mask = torch.from_numpy(mask)[None,] 288 | else: 289 | # 如果没有遮罩文件,创建全白遮罩 290 | processed_mask = torch.ones((1, processed_image.shape[1], processed_image.shape[2]), dtype=torch.float32) 291 | except Exception as e: 292 | print(f"Error loading mask: {str(e)}") 293 | # 创建默认遮罩 294 | processed_mask = torch.ones((1, processed_image.shape[1], processed_image.shape[2]), dtype=torch.float32) 295 | 296 | # 输出处理 297 | if not output_switch: 298 | return () 299 | 300 | # 更新持久化缓存 301 | self.update_persistent_cache() 302 | 303 | # 返回处理后的图像和遮罩 304 | return (processed_image, processed_mask) 305 | 306 | except Exception as e: 307 | print(f"Error in process_canvas_image: {str(e)}") 308 | traceback.print_exc() 309 | return () 310 | 311 | # 添加获取缓存数据的方法 312 | def get_cached_data(self): 313 | return { 314 | 'image': self.__class__._canvas_cache['image'], 315 | 'mask': self.__class__._canvas_cache['mask'] 316 | } 317 | 318 | # 添加API路由处理器 319 | @classmethod 320 | def api_get_data(cls, node_id): 321 | try: 322 | return { 323 | 'success': True, 324 | 'data': cls._canvas_cache 325 | } 326 | except Exception as e: 327 | return { 328 | 'success': False, 329 | 'error': str(e) 330 | } 331 | 332 | @classmethod 333 | def get_flow_status(cls, flow_id=None): 334 | """获取数据流状态""" 335 | if flow_id: 336 | return cls._canvas_cache['data_flow_status'].get(flow_id) 337 | return cls._canvas_cache['data_flow_status'] 338 | 339 | @classmethod 340 | def setup_routes(cls): 341 | @PromptServer.instance.routes.get("/ycnode/get_canvas_data/{node_id}") 342 | async def get_canvas_data(request): 343 | try: 344 | node_id = request.match_info["node_id"] 345 | print(f"Received request for node: {node_id}") 346 | 347 | cache_data = cls._canvas_cache 348 | print(f"Cache content: {cache_data}") 349 | print(f"Image in cache: {cache_data['image'] is not None}") 350 | 351 | response_data = { 352 | 'success': True, 353 | 'data': { 354 | 'image': None, 355 | 'mask': None 356 | } 357 | } 358 | 359 | if cache_data['image'] is not None: 360 | pil_image = cache_data['image'] 361 | buffered = io.BytesIO() 362 | pil_image.save(buffered, format="PNG") 363 | img_str = base64.b64encode(buffered.getvalue()).decode() 364 | response_data['data']['image'] = f"data:image/png;base64,{img_str}" 365 | 366 | if cache_data['mask'] is not None: 367 | pil_mask = cache_data['mask'] 368 | mask_buffer = io.BytesIO() 369 | pil_mask.save(mask_buffer, format="PNG") 370 | mask_str = base64.b64encode(mask_buffer.getvalue()).decode() 371 | response_data['data']['mask'] = f"data:image/png;base64,{mask_str}" 372 | 373 | return web.json_response(response_data) 374 | 375 | except Exception as e: 376 | print(f"Error in get_canvas_data: {str(e)}") 377 | return web.json_response({ 378 | 
'success': False, 379 | 'error': str(e) 380 | }) 381 | 382 | def store_image(self, image_data): 383 | # 将base64数据转换为PIL Image并存储 384 | if isinstance(image_data, str) and image_data.startswith('data:image'): 385 | image_data = image_data.split(',')[1] 386 | image_bytes = base64.b64decode(image_data) 387 | self.cached_image = Image.open(io.BytesIO(image_bytes)) 388 | else: 389 | self.cached_image = image_data 390 | 391 | def get_cached_image(self): 392 | # 将PIL Image转换为base64 393 | if self.cached_image: 394 | buffered = io.BytesIO() 395 | self.cached_image.save(buffered, format="PNG") 396 | img_str = base64.b64encode(buffered.getvalue()).decode() 397 | return f"data:image/png;base64,{img_str}" 398 | return None 399 | 400 | class BiRefNetMatting: 401 | def __init__(self): 402 | self.model = None 403 | self.model_path = None 404 | self.model_cache = {} 405 | # 使用 ComfyUI models 目录 406 | self.base_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "models") 407 | 408 | def load_model(self, model_path): 409 | try: 410 | if model_path not in self.model_cache: 411 | # 使用 ComfyUI models 目录下的 BiRefNet 路径 412 | full_model_path = os.path.join(self.base_path, "BiRefNet") 413 | 414 | print(f"Loading BiRefNet model from {full_model_path}...") 415 | 416 | try: 417 | # 直接从Hugging Face加载 418 | self.model = AutoModelForImageSegmentation.from_pretrained( 419 | "ZhengPeng7/BiRefNet", 420 | trust_remote_code=True, 421 | cache_dir=full_model_path # 使用本地缓存目录 422 | ) 423 | 424 | # 设置为评估模式并移动到GPU 425 | self.model.eval() 426 | if torch.cuda.is_available(): 427 | self.model = self.model.cuda() 428 | 429 | self.model_cache[model_path] = self.model 430 | print("Model loaded successfully from Hugging Face") 431 | print(f"Model type: {type(self.model)}") 432 | print(f"Model device: {next(self.model.parameters()).device}") 433 | 434 | except Exception as e: 435 | print(f"Failed to load model: {str(e)}") 436 | raise 437 | 438 | else: 439 | self.model = self.model_cache[model_path] 440 | print("Using cached model") 441 | 442 | return True 443 | 444 | except Exception as e: 445 | print(f"Error loading model: {str(e)}") 446 | traceback.print_exc() 447 | return False 448 | 449 | def preprocess_image(self, image): 450 | """预处理输入图像""" 451 | try: 452 | # 转换为PIL图像 453 | if isinstance(image, torch.Tensor): 454 | if image.dim() == 4: 455 | image = image.squeeze(0) 456 | if image.dim() == 3: 457 | image = transforms.ToPILImage()(image) 458 | 459 | # 参考nodes.py的预处理 460 | transform_image = transforms.Compose([ 461 | transforms.Resize((1024, 1024)), 462 | transforms.ToTensor(), 463 | transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) 464 | ]) 465 | 466 | # 转换为tensor并添加batch维度 467 | image_tensor = transform_image(image).unsqueeze(0) 468 | 469 | if torch.cuda.is_available(): 470 | image_tensor = image_tensor.cuda() 471 | 472 | return image_tensor 473 | except Exception as e: 474 | print(f"Error preprocessing image: {str(e)}") 475 | return None 476 | 477 | def execute(self, image, model_path, threshold=0.5, refinement=1): 478 | try: 479 | # 发送开始状态 480 | PromptServer.instance.send_sync("matting_status", {"status": "processing"}) 481 | 482 | # 加载模型 483 | if not self.load_model(model_path): 484 | raise RuntimeError("Failed to load model") 485 | 486 | # 获取原始尺寸 487 | if isinstance(image, torch.Tensor): 488 | original_size = image.shape[-2:] if image.dim() == 4 else image.shape[-2:] 489 | else: 490 | original_size = image.size[::-1] 491 | 492 | print(f"Original size: 
{original_size}") 493 | 494 | # 预处理图像 495 | processed_image = self.preprocess_image(image) 496 | if processed_image is None: 497 | raise Exception("Failed to preprocess image") 498 | 499 | print(f"Processed image shape: {processed_image.shape}") 500 | 501 | # 执行推理 502 | with torch.no_grad(): 503 | outputs = self.model(processed_image) 504 | result = outputs[-1].sigmoid().cpu() 505 | print(f"Model output shape: {result.shape}") 506 | 507 | # 确保结果有正的维度格式 [B, C, H, W] 508 | if result.dim() == 3: 509 | result = result.unsqueeze(1) # 添加通道维度 510 | elif result.dim() == 2: 511 | result = result.unsqueeze(0).unsqueeze(0) # 添加batch和通道维度 512 | 513 | print(f"Reshaped result shape: {result.shape}") 514 | 515 | # 调整大小 516 | result = F.interpolate( 517 | result, 518 | size=(original_size[0], original_size[1]), # 明确指定高度和宽度 519 | mode='bilinear', 520 | align_corners=True 521 | ) 522 | print(f"Resized result shape: {result.shape}") 523 | 524 | # 归一化 525 | result = result.squeeze() # 移除多余的维度 526 | ma = torch.max(result) 527 | mi = torch.min(result) 528 | result = (result-mi)/(ma-mi) 529 | 530 | # 应用阈值 531 | if threshold > 0: 532 | result = (result > threshold).float() 533 | 534 | # 创建mask和结果图像 535 | alpha_mask = result.unsqueeze(0).unsqueeze(0) # 确保mask是 [1, 1, H, W] 536 | if isinstance(image, torch.Tensor): 537 | if image.dim() == 3: 538 | image = image.unsqueeze(0) 539 | masked_image = image * alpha_mask 540 | else: 541 | image_tensor = transforms.ToTensor()(image).unsqueeze(0) 542 | masked_image = image_tensor * alpha_mask 543 | 544 | # 发送完成状态 545 | PromptServer.instance.send_sync("matting_status", {"status": "completed"}) 546 | 547 | return (masked_image, alpha_mask) 548 | 549 | except Exception as e: 550 | # 发送错误状态 551 | PromptServer.instance.send_sync("matting_status", {"status": "error"}) 552 | raise e 553 | 554 | @classmethod 555 | def IS_CHANGED(cls, image, model_path, threshold, refinement): 556 | """检查输入是否改变""" 557 | m = hashlib.md5() 558 | m.update(str(image).encode()) 559 | m.update(str(model_path).encode()) 560 | m.update(str(threshold).encode()) 561 | m.update(str(refinement).encode()) 562 | return m.hexdigest() 563 | 564 | @PromptServer.instance.routes.post("/matting") 565 | async def matting(request): 566 | try: 567 | print("Received matting request") 568 | data = await request.json() 569 | 570 | # 取BiRefNet实例 571 | matting = BiRefNetMatting() 572 | 573 | # 处理图像数据,现在返回图像tensor和alpha通道 574 | image_tensor, original_alpha = convert_base64_to_tensor(data["image"]) 575 | print(f"Input image shape: {image_tensor.shape}") 576 | 577 | # 执行抠图 578 | matted_image, alpha_mask = matting.execute( 579 | image_tensor, 580 | "BiRefNet/model.safetensors", 581 | threshold=data.get("threshold", 0.5), 582 | refinement=data.get("refinement", 1) 583 | ) 584 | 585 | # 转换结果为base64,包含原始alpha信息 586 | result_image = convert_tensor_to_base64(matted_image, alpha_mask, original_alpha) 587 | result_mask = convert_tensor_to_base64(alpha_mask) 588 | 589 | return web.json_response({ 590 | "matted_image": result_image, 591 | "alpha_mask": result_mask 592 | }) 593 | 594 | except Exception as e: 595 | print(f"Error in matting endpoint: {str(e)}") 596 | import traceback 597 | traceback.print_exc() 598 | return web.json_response({ 599 | "error": str(e), 600 | "details": traceback.format_exc() 601 | }, status=500) 602 | 603 | def convert_base64_to_tensor(base64_str): 604 | """将base64图像数据转换为tensor,保留alpha通道""" 605 | import base64 606 | import io 607 | 608 | try: 609 | # 解码base64数据 610 | img_data = 
base64.b64decode(base64_str.split(',')[1]) 611 | img = Image.open(io.BytesIO(img_data)) 612 | 613 | # 保存原始alpha通道 614 | has_alpha = img.mode == 'RGBA' 615 | alpha = None 616 | if has_alpha: 617 | alpha = img.split()[3] 618 | # 创建白色背景 619 | background = Image.new('RGB', img.size, (255, 255, 255)) 620 | background.paste(img, mask=alpha) 621 | img = background 622 | elif img.mode != 'RGB': 623 | img = img.convert('RGB') 624 | 625 | # 转换为tensor 626 | transform = transforms.ToTensor() 627 | img_tensor = transform(img).unsqueeze(0) # [1, C, H, W] 628 | 629 | if has_alpha: 630 | # 将alpha转换为tensor并保存 631 | alpha_tensor = transforms.ToTensor()(alpha).unsqueeze(0) # [1, 1, H, W] 632 | return img_tensor, alpha_tensor 633 | 634 | return img_tensor, None 635 | 636 | except Exception as e: 637 | print(f"Error in convert_base64_to_tensor: {str(e)}") 638 | raise 639 | 640 | def convert_tensor_to_base64(tensor, alpha_mask=None, original_alpha=None): 641 | """将tensor转换为base64图像数据,支持alpha通道""" 642 | import base64 643 | import io 644 | 645 | try: 646 | # 确保tensor在CPU上 647 | tensor = tensor.cpu() 648 | 649 | # 处理维度 650 | if tensor.dim() == 4: 651 | tensor = tensor.squeeze(0) # 移除batch维度 652 | if tensor.dim() == 3 and tensor.shape[0] in [1, 3]: 653 | tensor = tensor.permute(1, 2, 0) 654 | 655 | # 转换为numpy数组并调整值范围到0-255 656 | img_array = (tensor.numpy() * 255).astype(np.uint8) 657 | 658 | # 如果有alpha遮罩和原始alpha 659 | if alpha_mask is not None and original_alpha is not None: 660 | # 将alpha_mask转换为正确的格式 661 | alpha_mask = alpha_mask.cpu().squeeze().numpy() 662 | alpha_mask = (alpha_mask * 255).astype(np.uint8) 663 | 664 | # 将原始alpha转换为正确的格式 665 | original_alpha = original_alpha.cpu().squeeze().numpy() 666 | original_alpha = (original_alpha * 255).astype(np.uint8) 667 | 668 | # 组合alpha_mask和original_alpha 669 | combined_alpha = np.minimum(alpha_mask, original_alpha) 670 | 671 | # 创建RGBA图像 672 | img = Image.fromarray(img_array, mode='RGB') 673 | alpha_img = Image.fromarray(combined_alpha, mode='L') 674 | img.putalpha(alpha_img) 675 | else: 676 | # 处理没有alpha通道的情况 677 | if img_array.shape[-1] == 1: 678 | img_array = img_array.squeeze(-1) 679 | img = Image.fromarray(img_array, mode='L') 680 | else: 681 | img = Image.fromarray(img_array, mode='RGB') 682 | 683 | # 转换为base64 684 | buffer = io.BytesIO() 685 | img.save(buffer, format='PNG') 686 | img_str = base64.b64encode(buffer.getvalue()).decode() 687 | 688 | return f"data:image/png;base64,{img_str}" 689 | 690 | except Exception as e: 691 | print(f"Error in convert_tensor_to_base64: {str(e)}") 692 | print(f"Tensor shape: {tensor.shape}, dtype: {tensor.dtype}") 693 | raise 694 | -------------------------------------------------------------------------------- /js/Canvas.js: -------------------------------------------------------------------------------- 1 | export class Canvas { 2 | constructor(node, widget) { 3 | this.node = node; 4 | this.widget = widget; 5 | this.canvas = document.createElement('canvas'); 6 | this.ctx = this.canvas.getContext('2d'); 7 | this.width = 512; 8 | this.height = 512; 9 | this.layers = []; 10 | this.selectedLayer = null; 11 | this.isRotating = false; 12 | this.rotationStartAngle = 0; 13 | this.rotationCenter = { x: 0, y: 0 }; 14 | this.selectedLayers = []; 15 | this.isCtrlPressed = false; 16 | 17 | this.offscreenCanvas = document.createElement('canvas'); 18 | this.offscreenCtx = this.offscreenCanvas.getContext('2d', { 19 | alpha: false 20 | }); 21 | this.gridCache = document.createElement('canvas'); 22 | this.gridCacheCtx = 
this.gridCache.getContext('2d', { 23 | alpha: false 24 | }); 25 | 26 | this.renderAnimationFrame = null; 27 | this.lastRenderTime = 0; 28 | this.renderInterval = 1000 / 60; 29 | this.isDirty = false; 30 | 31 | this.dataInitialized = false; 32 | this.pendingDataCheck = null; 33 | 34 | this.initCanvas(); 35 | this.setupEventListeners(); 36 | this.initNodeData(); 37 | 38 | // 添加混合模式列表 39 | this.blendModes = [ 40 | { name: 'normal', label: '正常' }, 41 | { name: 'multiply', label: '正片叠底' }, 42 | { name: 'screen', label: '滤色' }, 43 | { name: 'overlay', label: '叠加' }, 44 | { name: 'darken', label: '变暗' }, 45 | { name: 'lighten', label: '变亮' }, 46 | { name: 'color-dodge', label: '颜色减淡' }, 47 | { name: 'color-burn', label: '颜色加深' }, 48 | { name: 'hard-light', label: '强光' }, 49 | { name: 'soft-light', label: '柔光' }, 50 | { name: 'difference', label: '差值' }, 51 | { name: 'exclusion', label: '排除' } 52 | ]; 53 | 54 | this.selectedBlendMode = null; 55 | this.blendOpacity = 100; 56 | this.isAdjustingOpacity = false; 57 | 58 | // 添加不透明度属性 59 | this.layers = this.layers.map(layer => ({ 60 | ...layer, 61 | opacity: 1 // 默认不透明度为 1 62 | })); 63 | } 64 | 65 | initCanvas() { 66 | this.canvas.width = this.width; 67 | this.canvas.height = this.height; 68 | this.canvas.style.border = '1px solid black'; 69 | this.canvas.style.maxWidth = '100%'; 70 | this.canvas.style.backgroundColor = '#606060'; 71 | } 72 | 73 | setupEventListeners() { 74 | let isDragging = false; 75 | let lastX = 0; 76 | let lastY = 0; 77 | let isRotating = false; 78 | let isResizing = false; 79 | let resizeHandle = null; 80 | let lastClickTime = 0; 81 | let isAltPressed = false; 82 | let dragStartX = 0; 83 | let dragStartY = 0; 84 | let originalWidth = 0; 85 | let originalHeight = 0; 86 | 87 | document.addEventListener('keydown', (e) => { 88 | if (e.key === 'Control') { 89 | this.isCtrlPressed = true; 90 | } 91 | if (e.key === 'Alt') { 92 | isAltPressed = true; 93 | e.preventDefault(); 94 | } 95 | if (e.key === 'Delete' && this.selectedLayer) { 96 | const index = this.layers.indexOf(this.selectedLayer); 97 | this.removeLayer(index); 98 | } 99 | }); 100 | 101 | document.addEventListener('keyup', (e) => { 102 | if (e.key === 'Control') { 103 | this.isCtrlPressed = false; 104 | } 105 | if (e.key === 'Alt') { 106 | isAltPressed = false; 107 | } 108 | }); 109 | 110 | this.canvas.addEventListener('mousedown', (e) => { 111 | const currentTime = new Date().getTime(); 112 | const rect = this.canvas.getBoundingClientRect(); 113 | const mouseX = e.clientX - rect.left; 114 | const mouseY = e.clientY - rect.top; 115 | 116 | if (currentTime - lastClickTime < 300) { 117 | this.selectedLayers = []; 118 | this.selectedLayer = null; 119 | this.render(); 120 | return; 121 | } 122 | lastClickTime = currentTime; 123 | 124 | const result = this.getLayerAtPosition(mouseX, mouseY); 125 | 126 | if (result) { 127 | const clickedLayer = result.layer; 128 | 129 | dragStartX = mouseX; 130 | dragStartY = mouseY; 131 | if (clickedLayer) { 132 | originalWidth = clickedLayer.width; 133 | originalHeight = clickedLayer.height; 134 | } 135 | 136 | if (this.isCtrlPressed) { 137 | const index = this.selectedLayers.indexOf(clickedLayer); 138 | if (index === -1) { 139 | this.selectedLayers.push(clickedLayer); 140 | this.selectedLayer = clickedLayer; 141 | } else { 142 | this.selectedLayers.splice(index, 1); 143 | this.selectedLayer = this.selectedLayers[this.selectedLayers.length - 1] || null; 144 | } 145 | } else { 146 | if (!this.selectedLayers.includes(clickedLayer)) { 147 | 
this.selectedLayers = [clickedLayer]; 148 | this.selectedLayer = clickedLayer; 149 | } 150 | } 151 | 152 | if (this.isRotationHandle(mouseX, mouseY)) { 153 | isRotating = true; 154 | this.rotationCenter.x = this.selectedLayer.x + this.selectedLayer.width/2; 155 | this.rotationCenter.y = this.selectedLayer.y + this.selectedLayer.height/2; 156 | this.rotationStartAngle = Math.atan2( 157 | mouseY - this.rotationCenter.y, 158 | mouseX - this.rotationCenter.x 159 | ); 160 | } else { 161 | isDragging = true; 162 | lastX = mouseX; 163 | lastY = mouseY; 164 | } 165 | } else { 166 | if (!this.isCtrlPressed) { 167 | this.selectedLayers = []; 168 | this.selectedLayer = null; 169 | } 170 | } 171 | this.render(); 172 | }); 173 | 174 | this.canvas.addEventListener('mousemove', (e) => { 175 | if (!this.selectedLayer) return; 176 | 177 | const rect = this.canvas.getBoundingClientRect(); 178 | const mouseX = e.clientX - rect.left; 179 | const mouseY = e.clientY - rect.top; 180 | 181 | if (isDragging && isAltPressed) { 182 | const dx = mouseX - dragStartX; 183 | const dy = mouseY - dragStartY; 184 | 185 | if (Math.abs(dx) > Math.abs(dy)) { 186 | this.selectedLayer.width = Math.max(20, originalWidth + dx); 187 | } else { 188 | this.selectedLayer.height = Math.max(20, originalHeight + dy); 189 | } 190 | 191 | this.render(); 192 | } else if (isDragging && !isAltPressed) { 193 | const dx = mouseX - lastX; 194 | const dy = mouseY - lastY; 195 | 196 | this.selectedLayers.forEach(layer => { 197 | layer.x += dx; 198 | layer.y += dy; 199 | }); 200 | 201 | lastX = mouseX; 202 | lastY = mouseY; 203 | this.render(); 204 | } 205 | 206 | const cursor = isAltPressed && isDragging 207 | ? (Math.abs(mouseX - dragStartX) > Math.abs(mouseY - dragStartY) ? 'ew-resize' : 'ns-resize') 208 | : this.getResizeHandle(mouseX, mouseY) 209 | ? 'nw-resize' 210 | : this.isRotationHandle(mouseX, mouseY) 211 | ? 'grab' 212 | : isDragging ? 'move' : 'default'; 213 | this.canvas.style.cursor = cursor; 214 | }); 215 | 216 | this.canvas.addEventListener('mouseup', () => { 217 | isDragging = false; 218 | isRotating = false; 219 | }); 220 | 221 | this.canvas.addEventListener('mouseleave', () => { 222 | isDragging = false; 223 | isRotating = false; 224 | }); 225 | 226 | // 添加鼠标滚轮缩放功能 227 | this.canvas.addEventListener('wheel', (e) => { 228 | if (!this.selectedLayer) return; 229 | 230 | e.preventDefault(); 231 | const scaleFactor = e.deltaY > 0 ? 0.95 : 1.05; 232 | 233 | // 如果按住Shift键,则进行旋转而不是缩放 234 | if (e.shiftKey) { 235 | const rotateAngle = e.deltaY > 0 ? 
-5 : 5; 236 | this.selectedLayers.forEach(layer => { 237 | layer.rotation = (layer.rotation + rotateAngle) % 360; 238 | }); 239 | } else { 240 | // 从鼠标位置为中心进行缩放 241 | const rect = this.canvas.getBoundingClientRect(); 242 | const mouseX = e.clientX - rect.left; 243 | const mouseY = e.clientY - rect.top; 244 | 245 | this.selectedLayers.forEach(layer => { 246 | const centerX = layer.x + layer.width/2; 247 | const centerY = layer.y + layer.height/2; 248 | 249 | // 计算鼠标相对于图中心的位置 250 | const relativeX = mouseX - centerX; 251 | const relativeY = mouseY - centerY; 252 | 253 | // 更新尺寸 254 | const oldWidth = layer.width; 255 | const oldHeight = layer.height; 256 | layer.width *= scaleFactor; 257 | layer.height *= scaleFactor; 258 | 259 | // 调整位置以保持鼠标指向的点不变 260 | layer.x += (oldWidth - layer.width) / 2; 261 | layer.y += (oldHeight - layer.height) / 2; 262 | }); 263 | } 264 | this.render(); 265 | }); 266 | 267 | // 优化旋转控制逻辑 268 | let initialRotation = 0; 269 | let initialAngle = 0; 270 | 271 | this.canvas.addEventListener('mousemove', (e) => { 272 | // ... 其他代码保持不变 ... 273 | 274 | if (isRotating) { 275 | const rect = this.canvas.getBoundingClientRect(); 276 | const mouseX = e.clientX - rect.left; 277 | const mouseY = e.clientY - rect.top; 278 | 279 | const centerX = this.selectedLayer.x + this.selectedLayer.width/2; 280 | const centerY = this.selectedLayer.y + this.selectedLayer.height/2; 281 | 282 | // 计算当前角度 283 | const angle = Math.atan2(mouseY - centerY, mouseX - centerX) * 180 / Math.PI; 284 | 285 | if (e.shiftKey) { 286 | // 按住Shift键时启用15度角度吸附 287 | const snap = 15; 288 | const rotation = Math.round((angle - initialAngle + initialRotation) / snap) * snap; 289 | this.selectedLayers.forEach(layer => { 290 | layer.rotation = rotation; 291 | }); 292 | } else { 293 | // 正常旋转 294 | const rotation = angle - initialAngle + initialRotation; 295 | this.selectedLayers.forEach(layer => { 296 | layer.rotation = rotation; 297 | }); 298 | } 299 | this.render(); 300 | } 301 | }); 302 | 303 | this.canvas.addEventListener('mousedown', (e) => { 304 | // ... 其他代码保持不变 ... 305 | 306 | if (this.isRotationHandle(mouseX, mouseY)) { 307 | isRotating = true; 308 | const centerX = this.selectedLayer.x + this.selectedLayer.width/2; 309 | const centerY = this.selectedLayer.y + this.selectedLayer.height/2; 310 | initialRotation = this.selectedLayer.rotation; 311 | initialAngle = Math.atan2(mouseY - centerY, mouseX - centerX) * 180 / Math.PI; 312 | } 313 | }); 314 | 315 | // 添加键盘快捷键 316 | document.addEventListener('keydown', (e) => { 317 | if (!this.selectedLayer) return; 318 | 319 | const step = e.shiftKey ? 
1 : 5; // Shift键按下时更精细的控制 320 | 321 | switch(e.key) { 322 | case 'ArrowLeft': 323 | this.selectedLayers.forEach(layer => layer.x -= step); 324 | break; 325 | case 'ArrowRight': 326 | this.selectedLayers.forEach(layer => layer.x += step); 327 | break; 328 | case 'ArrowUp': 329 | this.selectedLayers.forEach(layer => layer.y -= step); 330 | break; 331 | case 'ArrowDown': 332 | this.selectedLayers.forEach(layer => layer.y += step); 333 | break; 334 | case '[': 335 | this.selectedLayers.forEach(layer => layer.rotation -= step); 336 | break; 337 | case ']': 338 | this.selectedLayers.forEach(layer => layer.rotation += step); 339 | break; 340 | } 341 | 342 | if (['ArrowLeft', 'ArrowRight', 'ArrowUp', 'ArrowDown', '[', ']'].includes(e.key)) { 343 | e.preventDefault(); 344 | this.render(); 345 | } 346 | }); 347 | 348 | this.canvas.addEventListener('mousedown', (e) => { 349 | const rect = this.canvas.getBoundingClientRect(); 350 | const mouseX = e.clientX - rect.left; 351 | const mouseY = e.clientY - rect.top; 352 | 353 | if (e.shiftKey) { 354 | const result = this.getLayerAtPosition(mouseX, mouseY); 355 | if (result) { 356 | this.selectedLayer = result.layer; 357 | this.showBlendModeMenu(e.clientX, e.clientY); 358 | e.preventDefault(); // 阻止默认行为 359 | return; 360 | } 361 | } 362 | 363 | // ... 其余现的mousedown处理代 ... 364 | }); 365 | } 366 | 367 | isRotationHandle(x, y) { 368 | if (!this.selectedLayer) return false; 369 | 370 | const handleX = this.selectedLayer.x + this.selectedLayer.width/2; 371 | const handleY = this.selectedLayer.y - 20; 372 | const handleRadius = 5; 373 | 374 | return Math.sqrt(Math.pow(x - handleX, 2) + Math.pow(y - handleY, 2)) <= handleRadius; 375 | } 376 | 377 | addLayer(image) { 378 | try { 379 | console.log("Adding layer with image:", image); 380 | 381 | const layer = { 382 | image: image, 383 | x: (this.width - image.width) / 2, 384 | y: (this.height - image.height) / 2, 385 | width: image.width, 386 | height: image.height, 387 | rotation: 0, 388 | zIndex: this.layers.length, 389 | blendMode: 'normal', // 添加默认混合模式 390 | opacity: 1 // 添加默认透明度 391 | }; 392 | 393 | this.layers.push(layer); 394 | this.selectedLayer = layer; 395 | this.render(); 396 | 397 | console.log("Layer added successfully"); 398 | } catch (error) { 399 | console.error("Error adding layer:", error); 400 | throw error; 401 | } 402 | } 403 | 404 | removeLayer(index) { 405 | if (index >= 0 && index < this.layers.length) { 406 | this.layers.splice(index, 1); 407 | this.selectedLayer = this.layers[this.layers.length - 1] || null; 408 | this.render(); 409 | } 410 | } 411 | 412 | moveLayer(fromIndex, toIndex) { 413 | if (fromIndex >= 0 && fromIndex < this.layers.length && 414 | toIndex >= 0 && toIndex < this.layers.length) { 415 | const layer = this.layers.splice(fromIndex, 1)[0]; 416 | this.layers.splice(toIndex, 0, layer); 417 | this.render(); 418 | } 419 | } 420 | 421 | resizeLayer(scale) { 422 | this.selectedLayers.forEach(layer => { 423 | layer.width *= scale; 424 | layer.height *= scale; 425 | }); 426 | this.render(); 427 | } 428 | 429 | rotateLayer(angle) { 430 | this.selectedLayers.forEach(layer => { 431 | layer.rotation += angle; 432 | }); 433 | this.render(); 434 | } 435 | 436 | updateCanvasSize(width, height) { 437 | this.width = width; 438 | this.height = height; 439 | 440 | this.canvas.width = width; 441 | this.canvas.height = height; 442 | 443 | // 调整所有图层的位置和大小 444 | this.layers.forEach(layer => { 445 | const scale = Math.min( 446 | width / layer.image.width * 0.8, 447 | height / layer.image.height * 
0.8 448 | ); 449 | layer.width = layer.image.width * scale; 450 | layer.height = layer.image.height * scale; 451 | layer.x = (width - layer.width) / 2; 452 | layer.y = (height - layer.height) / 2; 453 | }); 454 | 455 | this.render(); 456 | } 457 | 458 | render() { 459 | if (this.renderAnimationFrame) { 460 | this.isDirty = true; 461 | return; 462 | } 463 | 464 | this.renderAnimationFrame = requestAnimationFrame(() => { 465 | const now = performance.now(); 466 | if (now - this.lastRenderTime >= this.renderInterval) { 467 | this.lastRenderTime = now; 468 | this.actualRender(); 469 | this.isDirty = false; 470 | } 471 | 472 | if (this.isDirty) { 473 | this.renderAnimationFrame = null; 474 | this.render(); 475 | } else { 476 | this.renderAnimationFrame = null; 477 | } 478 | }); 479 | } 480 | 481 | actualRender() { 482 | if (this.offscreenCanvas.width !== this.width || 483 | this.offscreenCanvas.height !== this.height) { 484 | this.offscreenCanvas.width = this.width; 485 | this.offscreenCanvas.height = this.height; 486 | } 487 | 488 | const ctx = this.offscreenCtx; 489 | 490 | ctx.fillStyle = '#606060'; 491 | ctx.fillRect(0, 0, this.width, this.height); 492 | 493 | this.drawCachedGrid(); 494 | 495 | const sortedLayers = [...this.layers].sort((a, b) => a.zIndex - b.zIndex); 496 | 497 | sortedLayers.forEach(layer => { 498 | if (!layer.image) return; 499 | 500 | ctx.save(); 501 | 502 | // 应用混合模式和不透明度 503 | ctx.globalCompositeOperation = layer.blendMode || 'normal'; 504 | ctx.globalAlpha = layer.opacity !== undefined ? layer.opacity : 1; 505 | 506 | const centerX = layer.x + layer.width/2; 507 | const centerY = layer.y + layer.height/2; 508 | const rad = layer.rotation * Math.PI / 180; 509 | 510 | // 1. 先设置变换 511 | ctx.setTransform( 512 | Math.cos(rad), Math.sin(rad), 513 | -Math.sin(rad), Math.cos(rad), 514 | centerX, centerY 515 | ); 516 | 517 | ctx.imageSmoothingEnabled = true; 518 | ctx.imageSmoothingQuality = 'high'; 519 | 520 | // 2. 先绘制原始图像 521 | ctx.drawImage( 522 | layer.image, 523 | -layer.width/2, 524 | -layer.height/2, 525 | layer.width, 526 | layer.height 527 | ); 528 | 529 | // 3. 再应用遮罩 530 | if (layer.mask) { 531 | try { 532 | console.log("Applying mask to layer"); 533 | const maskCanvas = document.createElement('canvas'); 534 | const maskCtx = maskCanvas.getContext('2d'); 535 | maskCanvas.width = layer.width; 536 | maskCanvas.height = layer.height; 537 | 538 | const maskImageData = maskCtx.createImageData(layer.width, layer.height); 539 | const maskData = new Float32Array(layer.mask); 540 | for (let i = 0; i < maskData.length; i++) { 541 | maskImageData.data[i * 4] = 542 | maskImageData.data[i * 4 + 1] = 543 | maskImageData.data[i * 4 + 2] = 255; 544 | maskImageData.data[i * 4 + 3] = maskData[i] * 255; 545 | } 546 | maskCtx.putImageData(maskImageData, 0, 0); 547 | 548 | // 使用destination-in混合模式 549 | ctx.globalCompositeOperation = 'destination-in'; 550 | ctx.drawImage(maskCanvas, 551 | -layer.width/2, -layer.height/2, 552 | layer.width, layer.height 553 | ); 554 | 555 | console.log("Mask applied successfully"); 556 | } catch (error) { 557 | console.error("Error applying mask:", error); 558 | } 559 | } 560 | 561 | // 4. 
最后绘制选择框 562 | if (this.selectedLayers.includes(layer)) { 563 | this.drawSelectionFrame(layer); 564 | } 565 | 566 | ctx.restore(); 567 | }); 568 | 569 | this.ctx.drawImage(this.offscreenCanvas, 0, 0); 570 | } 571 | 572 | drawCachedGrid() { 573 | if (this.gridCache.width !== this.width || 574 | this.gridCache.height !== this.height) { 575 | this.gridCache.width = this.width; 576 | this.gridCache.height = this.height; 577 | 578 | const ctx = this.gridCacheCtx; 579 | const gridSize = 20; 580 | 581 | ctx.beginPath(); 582 | ctx.strokeStyle = '#e0e0e0'; 583 | ctx.lineWidth = 0.5; 584 | 585 | for(let y = 0; y < this.height; y += gridSize) { 586 | ctx.moveTo(0, y); 587 | ctx.lineTo(this.width, y); 588 | } 589 | 590 | for(let x = 0; x < this.width; x += gridSize) { 591 | ctx.moveTo(x, 0); 592 | ctx.lineTo(x, this.height); 593 | } 594 | 595 | ctx.stroke(); 596 | } 597 | 598 | this.offscreenCtx.drawImage(this.gridCache, 0, 0); 599 | } 600 | 601 | drawSelectionFrame(layer) { 602 | const ctx = this.offscreenCtx; 603 | 604 | ctx.beginPath(); 605 | 606 | ctx.rect(-layer.width/2, -layer.height/2, layer.width, layer.height); 607 | 608 | ctx.moveTo(0, -layer.height/2); 609 | ctx.lineTo(0, -layer.height/2 - 20); 610 | 611 | ctx.strokeStyle = '#00ff00'; 612 | ctx.lineWidth = 2; 613 | ctx.stroke(); 614 | 615 | ctx.beginPath(); 616 | 617 | const points = [ 618 | {x: 0, y: -layer.height/2 - 20}, 619 | {x: -layer.width/2, y: -layer.height/2}, 620 | {x: layer.width/2, y: -layer.height/2}, 621 | {x: layer.width/2, y: layer.height/2}, 622 | {x: -layer.width/2, y: layer.height/2} 623 | ]; 624 | 625 | points.forEach(point => { 626 | ctx.moveTo(point.x, point.y); 627 | ctx.arc(point.x, point.y, 5, 0, Math.PI * 2); 628 | }); 629 | 630 | ctx.fillStyle = '#ffffff'; 631 | ctx.fill(); 632 | ctx.stroke(); 633 | } 634 | 635 | async saveToServer(fileName) { 636 | return new Promise((resolve) => { 637 | // 创建临时画布 638 | const tempCanvas = document.createElement('canvas'); 639 | const maskCanvas = document.createElement('canvas'); 640 | tempCanvas.width = this.width; 641 | tempCanvas.height = this.height; 642 | maskCanvas.width = this.width; 643 | maskCanvas.height = this.height; 644 | 645 | const tempCtx = tempCanvas.getContext('2d'); 646 | const maskCtx = maskCanvas.getContext('2d'); 647 | 648 | // 填充白色背景 649 | tempCtx.fillStyle = '#ffffff'; 650 | tempCtx.fillRect(0, 0, this.width, this.height); 651 | 652 | // 填充黑色背景作为遮罩的基础 653 | maskCtx.fillStyle = '#000000'; 654 | maskCtx.fillRect(0, 0, this.width, this.height); 655 | 656 | // 按照zIndex顺序绘制所有图层 657 | this.layers.sort((a, b) => a.zIndex - b.zIndex).forEach(layer => { 658 | // 绘制主图像,包含混合模式和透明度 659 | tempCtx.save(); 660 | 661 | // 应用混合模式和透明度 662 | tempCtx.globalCompositeOperation = layer.blendMode || 'normal'; 663 | tempCtx.globalAlpha = layer.opacity !== undefined ? 
layer.opacity : 1; 664 | 665 | tempCtx.translate(layer.x + layer.width/2, layer.y + layer.height/2); 666 | tempCtx.rotate(layer.rotation * Math.PI / 180); 667 | tempCtx.drawImage( 668 | layer.image, 669 | -layer.width/2, 670 | -layer.height/2, 671 | layer.width, 672 | layer.height 673 | ); 674 | tempCtx.restore(); 675 | 676 | // 处理遮罩 677 | maskCtx.save(); 678 | maskCtx.translate(layer.x + layer.width/2, layer.y + layer.height/2); 679 | maskCtx.rotate(layer.rotation * Math.PI / 180); 680 | maskCtx.globalCompositeOperation = 'lighter'; 681 | 682 | // 如果图层有遮罩,使用它 683 | if (layer.mask) { 684 | maskCtx.drawImage(layer.mask, -layer.width/2, -layer.height/2, layer.width, layer.height); 685 | } else { 686 | // 如果没有遮罩,使用图层的alpha通道和透明度值 687 | const layerCanvas = document.createElement('canvas'); 688 | layerCanvas.width = layer.width; 689 | layerCanvas.height = layer.height; 690 | const layerCtx = layerCanvas.getContext('2d'); 691 | layerCtx.drawImage(layer.image, 0, 0, layer.width, layer.height); 692 | const imageData = layerCtx.getImageData(0, 0, layer.width, layer.height); 693 | 694 | // 创建遮罩画布 695 | const alphaCanvas = document.createElement('canvas'); 696 | alphaCanvas.width = layer.width; 697 | alphaCanvas.height = layer.height; 698 | const alphaCtx = alphaCanvas.getContext('2d'); 699 | const alphaData = alphaCtx.createImageData(layer.width, layer.height); 700 | 701 | // 提取alpha通道并应用图层透明度 702 | for (let i = 0; i < imageData.data.length; i += 4) { 703 | const alpha = imageData.data[i + 3] * (layer.opacity !== undefined ? layer.opacity : 1); 704 | alphaData.data[i] = alphaData.data[i + 1] = alphaData.data[i + 2] = alpha; 705 | alphaData.data[i + 3] = 255; 706 | } 707 | 708 | alphaCtx.putImageData(alphaData, 0, 0); 709 | maskCtx.drawImage(alphaCanvas, -layer.width/2, -layer.height/2, layer.width, layer.height); 710 | } 711 | maskCtx.restore(); 712 | }); 713 | 714 | // 反转最终的遮罩 715 | const finalMaskData = maskCtx.getImageData(0, 0, this.width, this.height); 716 | for (let i = 0; i < finalMaskData.data.length; i += 4) { 717 | finalMaskData.data[i] = 718 | finalMaskData.data[i + 1] = 719 | finalMaskData.data[i + 2] = 255 - finalMaskData.data[i]; 720 | finalMaskData.data[i + 3] = 255; 721 | } 722 | maskCtx.putImageData(finalMaskData, 0, 0); 723 | 724 | // 保存主图像和遮罩 725 | tempCanvas.toBlob(async (blob) => { 726 | const formData = new FormData(); 727 | formData.append("image", blob, fileName); 728 | formData.append("overwrite", "true"); 729 | 730 | try { 731 | const resp = await fetch("/upload/image", { 732 | method: "POST", 733 | body: formData, 734 | }); 735 | 736 | if (resp.status === 200) { 737 | // 保存遮罩图像 738 | maskCanvas.toBlob(async (maskBlob) => { 739 | const maskFormData = new FormData(); 740 | const maskFileName = fileName.replace('.png', '_mask.png'); 741 | maskFormData.append("image", maskBlob, maskFileName); 742 | maskFormData.append("overwrite", "true"); 743 | 744 | try { 745 | const maskResp = await fetch("/upload/image", { 746 | method: "POST", 747 | body: maskFormData, 748 | }); 749 | 750 | if (maskResp.status === 200) { 751 | const data = await resp.json(); 752 | this.widget.value = data.name; 753 | resolve(true); 754 | } else { 755 | console.error("Error saving mask: " + maskResp.status); 756 | resolve(false); 757 | } 758 | } catch (error) { 759 | console.error("Error saving mask:", error); 760 | resolve(false); 761 | } 762 | }, "image/png"); 763 | } else { 764 | console.error(resp.status + " - " + resp.statusText); 765 | resolve(false); 766 | } 767 | } catch (error) { 768 | 
console.error(error); 769 | resolve(false); 770 | } 771 | }, "image/png"); 772 | }); 773 | } 774 | 775 | moveLayerUp() { 776 | if (!this.selectedLayer) return; 777 | const index = this.layers.indexOf(this.selectedLayer); 778 | if (index < this.layers.length - 1) { 779 | const temp = this.layers[index].zIndex; 780 | this.layers[index].zIndex = this.layers[index + 1].zIndex; 781 | this.layers[index + 1].zIndex = temp; 782 | [this.layers[index], this.layers[index + 1]] = [this.layers[index + 1], this.layers[index]]; 783 | this.render(); 784 | } 785 | } 786 | 787 | moveLayerDown() { 788 | if (!this.selectedLayer) return; 789 | const index = this.layers.indexOf(this.selectedLayer); 790 | if (index > 0) { 791 | const temp = this.layers[index].zIndex; 792 | this.layers[index].zIndex = this.layers[index - 1].zIndex; 793 | this.layers[index - 1].zIndex = temp; 794 | [this.layers[index], this.layers[index - 1]] = [this.layers[index - 1], this.layers[index]]; 795 | this.render(); 796 | } 797 | } 798 | 799 | getLayerAtPosition(x, y) { 800 | // 获取画布的实际显示尺寸和位置 801 | const rect = this.canvas.getBoundingClientRect(); 802 | 803 | // 计算画布的缩放比例 804 | const displayWidth = rect.width; 805 | const displayHeight = rect.height; 806 | const scaleX = this.width / displayWidth; 807 | const scaleY = this.height / displayHeight; 808 | 809 | // 计算鼠标在画布上的实际位置 810 | const canvasX = (x) * scaleX; 811 | const canvasY = (y) * scaleY; 812 | 813 | // 从上层到下层遍历所有图层 814 | for (let i = this.layers.length - 1; i >= 0; i--) { 815 | const layer = this.layers[i]; 816 | 817 | // 计算旋转后的点击位置 818 | const centerX = layer.x + layer.width/2; 819 | const centerY = layer.y + layer.height/2; 820 | const rad = -layer.rotation * Math.PI / 180; 821 | 822 | // 将点击坐标转换到图层的本地坐标系 823 | const dx = canvasX - centerX; 824 | const dy = canvasY - centerY; 825 | const rotatedX = dx * Math.cos(rad) - dy * Math.sin(rad) + centerX; 826 | const rotatedY = dx * Math.sin(rad) + dy * Math.cos(rad) + centerY; 827 | 828 | // 检查点击位置是否在图层范围内 829 | if (rotatedX >= layer.x && 830 | rotatedX <= layer.x + layer.width && 831 | rotatedY >= layer.y && 832 | rotatedY <= layer.y + layer.height) { 833 | 834 | // 创建临时画布来检查透明度 835 | const tempCanvas = document.createElement('canvas'); 836 | const tempCtx = tempCanvas.getContext('2d'); 837 | tempCanvas.width = layer.width; 838 | tempCanvas.height = layer.height; 839 | 840 | // 绘制图层到临时画布 841 | tempCtx.save(); 842 | tempCtx.clearRect(0, 0, layer.width, layer.height); 843 | tempCtx.drawImage( 844 | layer.image, 845 | 0, 846 | 0, 847 | layer.width, 848 | layer.height 849 | ); 850 | tempCtx.restore(); 851 | 852 | // 获取点击位置的像素数据 853 | const localX = rotatedX - layer.x; 854 | const localY = rotatedY - layer.y; 855 | 856 | try { 857 | const pixel = tempCtx.getImageData( 858 | Math.round(localX), 859 | Math.round(localY), 860 | 1, 1 861 | ).data; 862 | // 检查像素的alpha值 863 | if (pixel[3] > 10) { 864 | return { 865 | layer: layer, 866 | localX: localX, 867 | localY: localY 868 | }; 869 | } 870 | } catch(e) { 871 | console.error("Error checking pixel transparency:", e); 872 | } 873 | } 874 | } 875 | return null; 876 | } 877 | 878 | getResizeHandle(x, y) { 879 | if (!this.selectedLayer) return null; 880 | 881 | const handleRadius = 5; 882 | const handles = { 883 | 'nw': {x: this.selectedLayer.x, y: this.selectedLayer.y}, 884 | 'ne': {x: this.selectedLayer.x + this.selectedLayer.width, y: this.selectedLayer.y}, 885 | 'se': {x: this.selectedLayer.x + this.selectedLayer.width, y: this.selectedLayer.y + this.selectedLayer.height}, 886 | 'sw': {x: 
this.selectedLayer.x, y: this.selectedLayer.y + this.selectedLayer.height} 887 | }; 888 | 889 | for (const [position, point] of Object.entries(handles)) { 890 | if (Math.sqrt(Math.pow(x - point.x, 2) + Math.pow(y - point.y, 2)) <= handleRadius) { 891 | return position; 892 | } 893 | } 894 | return null; 895 | } 896 | 897 | // 修改水平镜像方法 898 | mirrorHorizontal() { 899 | if (!this.selectedLayer) return; 900 | 901 | // 创建临时画布 902 | const tempCanvas = document.createElement('canvas'); 903 | const tempCtx = tempCanvas.getContext('2d'); 904 | tempCanvas.width = this.selectedLayer.image.width; 905 | tempCanvas.height = this.selectedLayer.image.height; 906 | 907 | // 水平翻转绘制 908 | tempCtx.translate(tempCanvas.width, 0); 909 | tempCtx.scale(-1, 1); 910 | tempCtx.drawImage(this.selectedLayer.image, 0, 0); 911 | 912 | // 创建新图像 913 | const newImage = new Image(); 914 | newImage.onload = () => { 915 | this.selectedLayer.image = newImage; 916 | this.render(); 917 | }; 918 | newImage.src = tempCanvas.toDataURL(); 919 | } 920 | 921 | // 修改垂直镜像方法 922 | mirrorVertical() { 923 | if (!this.selectedLayer) return; 924 | 925 | // 创建临时画布 926 | const tempCanvas = document.createElement('canvas'); 927 | const tempCtx = tempCanvas.getContext('2d'); 928 | tempCanvas.width = this.selectedLayer.image.width; 929 | tempCanvas.height = this.selectedLayer.image.height; 930 | 931 | // 垂直翻转绘制 932 | tempCtx.translate(0, tempCanvas.height); 933 | tempCtx.scale(1, -1); 934 | tempCtx.drawImage(this.selectedLayer.image, 0, 0); 935 | 936 | // 创建新图像 937 | const newImage = new Image(); 938 | newImage.onload = () => { 939 | this.selectedLayer.image = newImage; 940 | this.render(); 941 | }; 942 | newImage.src = tempCanvas.toDataURL(); 943 | } 944 | 945 | async getLayerImageData(layer) { 946 | try { 947 | const tempCanvas = document.createElement('canvas'); 948 | const tempCtx = tempCanvas.getContext('2d'); 949 | 950 | // 设置画布尺寸 951 | tempCanvas.width = layer.width; 952 | tempCanvas.height = layer.height; 953 | 954 | // 清除画布 955 | tempCtx.clearRect(0, 0, tempCanvas.width, tempCanvas.height); 956 | 957 | // 绘制图层 958 | tempCtx.save(); 959 | tempCtx.translate(layer.width/2, layer.height/2); 960 | tempCtx.rotate(layer.rotation * Math.PI / 180); 961 | tempCtx.drawImage( 962 | layer.image, 963 | -layer.width/2, 964 | -layer.height/2, 965 | layer.width, 966 | layer.height 967 | ); 968 | tempCtx.restore(); 969 | 970 | // 获取base64数据 971 | const dataUrl = tempCanvas.toDataURL('image/png'); 972 | if (!dataUrl.startsWith('data:image/png;base64,')) { 973 | throw new Error("Invalid image data format"); 974 | } 975 | 976 | return dataUrl; 977 | } catch (error) { 978 | console.error("Error getting layer image data:", error); 979 | throw error; 980 | } 981 | } 982 | 983 | // 添加带遮罩的图层 984 | addMattedLayer(image, mask) { 985 | const layer = { 986 | image: image, 987 | mask: mask, 988 | x: 0, 989 | y: 0, 990 | width: image.width, 991 | height: image.height, 992 | rotation: 0, 993 | zIndex: this.layers.length 994 | }; 995 | 996 | this.layers.push(layer); 997 | this.selectedLayer = layer; 998 | this.render(); 999 | } 1000 | 1001 | processInputData(nodeData) { 1002 | if (nodeData.input_image) { 1003 | this.addInputImage(nodeData.input_image); 1004 | } 1005 | if (nodeData.input_mask) { 1006 | this.addInputMask(nodeData.input_mask); 1007 | } 1008 | } 1009 | 1010 | addInputImage(imageData) { 1011 | const layer = new ImageLayer(imageData); 1012 | this.layers.push(layer); 1013 | this.updateCanvas(); 1014 | } 1015 | 1016 | addInputMask(maskData) { 1017 | if 
(this.inputImage) { 1018 | const mask = new MaskLayer(maskData); 1019 | mask.linkToLayer(this.inputImage); 1020 | this.masks.push(mask); 1021 | this.updateCanvas(); 1022 | } 1023 | } 1024 | 1025 | async addInputToCanvas(inputImage, inputMask) { 1026 | try { 1027 | console.log("Adding input to canvas:", { inputImage }); 1028 | 1029 | // 创建临时画布 1030 | const tempCanvas = document.createElement('canvas'); 1031 | const tempCtx = tempCanvas.getContext('2d'); 1032 | tempCanvas.width = inputImage.width; 1033 | tempCanvas.height = inputImage.height; 1034 | 1035 | // 将数据绘制到临时画布 1036 | const imgData = new ImageData( 1037 | inputImage.data, 1038 | inputImage.width, 1039 | inputImage.height 1040 | ); 1041 | tempCtx.putImageData(imgData, 0, 0); 1042 | 1043 | // 创建新图像 1044 | const image = new Image(); 1045 | await new Promise((resolve, reject) => { 1046 | image.onload = resolve; 1047 | image.onerror = reject; 1048 | image.src = tempCanvas.toDataURL(); 1049 | }); 1050 | 1051 | // 计算缩放比例 1052 | const scale = Math.min( 1053 | this.width / inputImage.width * 0.8, 1054 | this.height / inputImage.height * 0.8 1055 | ); 1056 | 1057 | // 创建新图层 1058 | const layer = { 1059 | image: image, 1060 | x: (this.width - inputImage.width * scale) / 2, 1061 | y: (this.height - inputImage.height * scale) / 2, 1062 | width: inputImage.width * scale, 1063 | height: inputImage.height * scale, 1064 | rotation: 0, 1065 | zIndex: this.layers.length 1066 | }; 1067 | 1068 | // 如果有遮罩数据,添加到图层 1069 | if (inputMask) { 1070 | layer.mask = inputMask.data; 1071 | } 1072 | 1073 | // 添加图层并选中 1074 | this.layers.push(layer); 1075 | this.selectedLayer = layer; 1076 | 1077 | // 渲染画布 1078 | this.render(); 1079 | console.log("Layer added successfully"); 1080 | 1081 | return true; 1082 | 1083 | } catch (error) { 1084 | console.error("Error in addInputToCanvas:", error); 1085 | throw error; 1086 | } 1087 | } 1088 | 1089 | // 改进图像转换方法 1090 | async convertTensorToImage(tensor) { 1091 | try { 1092 | console.log("Converting tensor to image:", tensor); 1093 | 1094 | if (!tensor || !tensor.data || !tensor.width || !tensor.height) { 1095 | throw new Error("Invalid tensor data"); 1096 | } 1097 | 1098 | // 创建临时画布 1099 | const canvas = document.createElement('canvas'); 1100 | const ctx = canvas.getContext('2d'); 1101 | canvas.width = tensor.width; 1102 | canvas.height = tensor.height; 1103 | 1104 | // 创建像数据 1105 | const imageData = new ImageData( 1106 | new Uint8ClampedArray(tensor.data), 1107 | tensor.width, 1108 | tensor.height 1109 | ); 1110 | 1111 | // 将数据绘制到画布 1112 | ctx.putImageData(imageData, 0, 0); 1113 | 1114 | // 创建新图像 1115 | return new Promise((resolve, reject) => { 1116 | const img = new Image(); 1117 | img.onload = () => resolve(img); 1118 | img.onerror = (e) => reject(new Error("Failed to load image: " + e)); 1119 | img.src = canvas.toDataURL(); 1120 | }); 1121 | } catch (error) { 1122 | console.error("Error converting tensor to image:", error); 1123 | throw error; 1124 | } 1125 | } 1126 | 1127 | // 改进遮罩转换方法 1128 | async convertTensorToMask(tensor) { 1129 | if (!tensor || !tensor.data) { 1130 | throw new Error("Invalid mask tensor"); 1131 | } 1132 | 1133 | try { 1134 | // 确保数据是Float32Array 1135 | return new Float32Array(tensor.data); 1136 | } catch (error) { 1137 | throw new Error(`Mask conversion failed: ${error.message}`); 1138 | } 1139 | } 1140 | 1141 | // 改进数据初始化方法 1142 | async initNodeData() { 1143 | try { 1144 | console.log("Starting node data initialization..."); 1145 | 1146 | // 检查节点和输入是否存在 1147 | if (!this.node || !this.node.inputs) { 
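// Inputs are not wired up yet: log and defer to scheduleDataCheck(), which retries
// initNodeData() roughly every second until dataInitialized is set.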
1148 | console.log("Node or inputs not ready"); 1149 | return this.scheduleDataCheck(); 1150 | } 1151 | 1152 | // 检查图像��入 1153 | if (this.node.inputs[0] && this.node.inputs[0].link) { 1154 | const imageLinkId = this.node.inputs[0].link; 1155 | const imageData = app.nodeOutputs[imageLinkId]; 1156 | 1157 | if (imageData) { 1158 | console.log("Found image data:", imageData); 1159 | await this.processImageData(imageData); 1160 | this.dataInitialized = true; 1161 | } else { 1162 | console.log("Image data not available yet"); 1163 | return this.scheduleDataCheck(); 1164 | } 1165 | } 1166 | 1167 | // 检查遮罩输入 1168 | if (this.node.inputs[1] && this.node.inputs[1].link) { 1169 | const maskLinkId = this.node.inputs[1].link; 1170 | const maskData = app.nodeOutputs[maskLinkId]; 1171 | 1172 | if (maskData) { 1173 | console.log("Found mask data:", maskData); 1174 | await this.processMaskData(maskData); 1175 | } 1176 | } 1177 | 1178 | } catch (error) { 1179 | console.error("Error in initNodeData:", error); 1180 | return this.scheduleDataCheck(); 1181 | } 1182 | } 1183 | 1184 | // 添加数据检查调度方法 1185 | scheduleDataCheck() { 1186 | if (this.pendingDataCheck) { 1187 | clearTimeout(this.pendingDataCheck); 1188 | } 1189 | 1190 | this.pendingDataCheck = setTimeout(() => { 1191 | this.pendingDataCheck = null; 1192 | if (!this.dataInitialized) { 1193 | this.initNodeData(); 1194 | } 1195 | }, 1000); // 1秒后重试 1196 | } 1197 | 1198 | // 修改图像数据处理方法 1199 | async processImageData(imageData) { 1200 | try { 1201 | if (!imageData) return; 1202 | 1203 | console.log("Processing image data:", { 1204 | type: typeof imageData, 1205 | isArray: Array.isArray(imageData), 1206 | shape: imageData.shape, 1207 | hasData: !!imageData.data 1208 | }); 1209 | 1210 | // 处理数组格式 1211 | if (Array.isArray(imageData)) { 1212 | imageData = imageData[0]; 1213 | } 1214 | 1215 | // 验证数据格式 1216 | if (!imageData.shape || !imageData.data) { 1217 | throw new Error("Invalid image data format"); 1218 | } 1219 | 1220 | // 保持原始尺寸和比例 1221 | const originalWidth = imageData.shape[2]; 1222 | const originalHeight = imageData.shape[1]; 1223 | 1224 | // 计算适当的缩放比例 1225 | const scale = Math.min( 1226 | this.width / originalWidth * 0.8, 1227 | this.height / originalHeight * 0.8 1228 | ); 1229 | 1230 | // 转换数据 1231 | const convertedData = this.convertTensorToImageData(imageData); 1232 | if (convertedData) { 1233 | const image = await this.createImageFromData(convertedData); 1234 | 1235 | // 使用计算的缩放比例添加图层 1236 | this.addScaledLayer(image, scale); 1237 | console.log("Image layer added successfully with scale:", scale); 1238 | } 1239 | } catch (error) { 1240 | console.error("Error processing image data:", error); 1241 | throw error; 1242 | } 1243 | } 1244 | 1245 | // 添加新的缩放图层方法 1246 | addScaledLayer(image, scale) { 1247 | try { 1248 | const scaledWidth = image.width * scale; 1249 | const scaledHeight = image.height * scale; 1250 | 1251 | const layer = { 1252 | image: image, 1253 | x: (this.width - scaledWidth) / 2, 1254 | y: (this.height - scaledHeight) / 2, 1255 | width: scaledWidth, 1256 | height: scaledHeight, 1257 | rotation: 0, 1258 | zIndex: this.layers.length, 1259 | originalWidth: image.width, 1260 | originalHeight: image.height 1261 | }; 1262 | 1263 | this.layers.push(layer); 1264 | this.selectedLayer = layer; 1265 | this.render(); 1266 | 1267 | console.log("Scaled layer added:", { 1268 | originalSize: `${image.width}x${image.height}`, 1269 | scaledSize: `${scaledWidth}x${scaledHeight}`, 1270 | scale: scale 1271 | }); 1272 | } catch (error) { 1273 | 
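// Log locally and rethrow so the caller (processImageData) can handle the failure.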
console.error("Error adding scaled layer:", error); 1274 | throw error; 1275 | } 1276 | } 1277 | 1278 | // 改进张量转换方法 1279 | convertTensorToImageData(tensor) { 1280 | try { 1281 | const shape = tensor.shape; 1282 | const height = shape[1]; 1283 | const width = shape[2]; 1284 | const channels = shape[3]; 1285 | 1286 | console.log("Converting tensor:", { 1287 | shape: shape, 1288 | dataRange: { 1289 | min: tensor.min_val, 1290 | max: tensor.max_val 1291 | } 1292 | }); 1293 | 1294 | // 创建图像数据 1295 | const imageData = new ImageData(width, height); 1296 | const data = new Uint8ClampedArray(width * height * 4); 1297 | 1298 | // 重建数据结构 1299 | const flatData = tensor.data; 1300 | const pixelCount = width * height; 1301 | 1302 | for (let i = 0; i < pixelCount; i++) { 1303 | const pixelIndex = i * 4; 1304 | const tensorIndex = i * channels; 1305 | 1306 | // 正确处理RGB通道 1307 | for (let c = 0; c < channels; c++) { 1308 | const value = flatData[tensorIndex + c]; 1309 | // 根据实际值范围行映射 1310 | const normalizedValue = (value - tensor.min_val) / (tensor.max_val - tensor.min_val); 1311 | data[pixelIndex + c] = Math.round(normalizedValue * 255); 1312 | } 1313 | 1314 | // Alpha通道 1315 | data[pixelIndex + 3] = 255; 1316 | } 1317 | 1318 | imageData.data.set(data); 1319 | return imageData; 1320 | } catch (error) { 1321 | console.error("Error converting tensor:", error); 1322 | return null; 1323 | } 1324 | } 1325 | 1326 | // 添加图像创建方法 1327 | async createImageFromData(imageData) { 1328 | return new Promise((resolve, reject) => { 1329 | const canvas = document.createElement('canvas'); 1330 | canvas.width = imageData.width; 1331 | canvas.height = imageData.height; 1332 | const ctx = canvas.getContext('2d'); 1333 | ctx.putImageData(imageData, 0, 0); 1334 | 1335 | const img = new Image(); 1336 | img.onload = () => resolve(img); 1337 | img.onerror = reject; 1338 | img.src = canvas.toDataURL(); 1339 | }); 1340 | } 1341 | 1342 | // 添加数据重试机制 1343 | async retryDataLoad(maxRetries = 3, delay = 1000) { 1344 | for (let i = 0; i < maxRetries; i++) { 1345 | try { 1346 | await this.initNodeData(); 1347 | return; 1348 | } catch (error) { 1349 | console.warn(`Retry ${i + 1}/${maxRetries} failed:`, error); 1350 | if (i < maxRetries - 1) { 1351 | await new Promise(resolve => setTimeout(resolve, delay)); 1352 | } 1353 | } 1354 | } 1355 | console.error("Failed to load data after", maxRetries, "retries"); 1356 | } 1357 | 1358 | async processMaskData(maskData) { 1359 | try { 1360 | if (!maskData) return; 1361 | 1362 | console.log("Processing mask data:", maskData); 1363 | 1364 | // 处理数组格式 1365 | if (Array.isArray(maskData)) { 1366 | maskData = maskData[0]; 1367 | } 1368 | 1369 | // 检查数据格式 1370 | if (!maskData.shape || !maskData.data) { 1371 | throw new Error("Invalid mask data format"); 1372 | } 1373 | 1374 | // 如果有选中的图层,应用遮罩 1375 | if (this.selectedLayer) { 1376 | const maskTensor = await this.convertTensorToMask(maskData); 1377 | this.selectedLayer.mask = maskTensor; 1378 | this.render(); 1379 | console.log("Mask applied to selected layer"); 1380 | } 1381 | } catch (error) { 1382 | console.error("Error processing mask data:", error); 1383 | } 1384 | } 1385 | 1386 | async loadImageFromCache(base64Data) { 1387 | return new Promise((resolve, reject) => { 1388 | const img = new Image(); 1389 | img.onload = () => resolve(img); 1390 | img.onerror = reject; 1391 | img.src = base64Data; 1392 | }); 1393 | } 1394 | 1395 | async importImage(cacheData) { 1396 | try { 1397 | console.log("Starting image import with cache data"); 1398 | const img = await 
this.loadImageFromCache(cacheData.image); 1399 | const mask = cacheData.mask ? await this.loadImageFromCache(cacheData.mask) : null; 1400 | 1401 | // 计算缩放比例 1402 | const scale = Math.min( 1403 | this.width / img.width * 0.8, 1404 | this.height / img.height * 0.8 1405 | ); 1406 | 1407 | // 创建临时画布来合并图像和遮罩 1408 | const tempCanvas = document.createElement('canvas'); 1409 | tempCanvas.width = img.width; 1410 | tempCanvas.height = img.height; 1411 | const tempCtx = tempCanvas.getContext('2d'); 1412 | 1413 | // 绘制图像 1414 | tempCtx.drawImage(img, 0, 0); 1415 | 1416 | // 如果有遮罩,应用遮罩 1417 | if (mask) { 1418 | const imageData = tempCtx.getImageData(0, 0, img.width, img.height); 1419 | const maskCanvas = document.createElement('canvas'); 1420 | maskCanvas.width = img.width; 1421 | maskCanvas.height = img.height; 1422 | const maskCtx = maskCanvas.getContext('2d'); 1423 | maskCtx.drawImage(mask, 0, 0); 1424 | const maskData = maskCtx.getImageData(0, 0, img.width, img.height); 1425 | 1426 | // 应用遮罩到alpha通道 1427 | for (let i = 0; i < imageData.data.length; i += 4) { 1428 | imageData.data[i + 3] = maskData.data[i]; 1429 | } 1430 | 1431 | tempCtx.putImageData(imageData, 0, 0); 1432 | } 1433 | 1434 | // 创��最终图像 1435 | const finalImage = new Image(); 1436 | await new Promise((resolve) => { 1437 | finalImage.onload = resolve; 1438 | finalImage.src = tempCanvas.toDataURL(); 1439 | }); 1440 | 1441 | // 创建新图层 1442 | const layer = { 1443 | image: finalImage, 1444 | x: (this.width - img.width * scale) / 2, 1445 | y: (this.height - img.height * scale) / 2, 1446 | width: img.width * scale, 1447 | height: img.height * scale, 1448 | rotation: 0, 1449 | zIndex: this.layers.length 1450 | }; 1451 | 1452 | this.layers.push(layer); 1453 | this.selectedLayer = layer; 1454 | this.render(); 1455 | 1456 | } catch (error) { 1457 | console.error('Error importing image:', error); 1458 | } 1459 | } 1460 | 1461 | // 修改 showBlendModeMenu 方法 1462 | showBlendModeMenu(x, y) { 1463 | // 移除已存在的菜单 1464 | const existingMenu = document.getElementById('blend-mode-menu'); 1465 | if (existingMenu) { 1466 | document.body.removeChild(existingMenu); 1467 | } 1468 | 1469 | const menu = document.createElement('div'); 1470 | menu.id = 'blend-mode-menu'; 1471 | menu.style.cssText = ` 1472 | position: fixed; 1473 | left: ${x}px; 1474 | top: ${y}px; 1475 | background: #2a2a2a; 1476 | border: 1px solid #3a3a3a; 1477 | border-radius: 4px; 1478 | padding: 5px; 1479 | z-index: 1000; 1480 | box-shadow: 0 2px 10px rgba(0,0,0,0.3); 1481 | `; 1482 | 1483 | this.blendModes.forEach(mode => { 1484 | const container = document.createElement('div'); 1485 | container.className = 'blend-mode-container'; 1486 | container.style.cssText = ` 1487 | margin-bottom: 5px; 1488 | `; 1489 | 1490 | const option = document.createElement('div'); 1491 | option.style.cssText = ` 1492 | padding: 5px 10px; 1493 | color: white; 1494 | cursor: pointer; 1495 | transition: background-color 0.2s; 1496 | `; 1497 | option.textContent = `${mode.label} (${mode.name})`; 1498 | 1499 | // 创建滑动条,使用当前图层的透明度值 1500 | const slider = document.createElement('input'); 1501 | slider.type = 'range'; 1502 | slider.min = '0'; 1503 | slider.max = '100'; 1504 | // 使用当前图层的透明度值,如果存在的话 1505 | slider.value = this.selectedLayer.opacity ? 
Math.round(this.selectedLayer.opacity * 100) : 100; 1506 | slider.style.cssText = ` 1507 | width: 100%; 1508 | margin: 5px 0; 1509 | display: none; 1510 | `; 1511 | 1512 | // 如果是当前图层的混合模式,显示滑动条 1513 | if (this.selectedLayer.blendMode === mode.name) { 1514 | slider.style.display = 'block'; 1515 | option.style.backgroundColor = '#3a3a3a'; 1516 | } 1517 | 1518 | // 修改点击事件 1519 | option.onclick = () => { 1520 | // 隐藏所有其他滑动条 1521 | menu.querySelectorAll('input[type="range"]').forEach(s => { 1522 | s.style.display = 'none'; 1523 | }); 1524 | menu.querySelectorAll('.blend-mode-container div').forEach(d => { 1525 | d.style.backgroundColor = ''; 1526 | }); 1527 | 1528 | // 显示当前选项的滑动条 1529 | slider.style.display = 'block'; 1530 | option.style.backgroundColor = '#3a3a3a'; 1531 | 1532 | // 设置当前选中的混合模式 1533 | if (this.selectedLayer) { 1534 | this.selectedLayer.blendMode = mode.name; 1535 | this.render(); 1536 | } 1537 | }; 1538 | 1539 | // 添加滑动条的input事件(实时更新) 1540 | slider.addEventListener('input', () => { 1541 | if (this.selectedLayer) { 1542 | this.selectedLayer.opacity = slider.value / 100; 1543 | this.render(); 1544 | } 1545 | }); 1546 | 1547 | // 添加滑动条的change事件(结束拖动时保存状态) 1548 | slider.addEventListener('change', async () => { 1549 | if (this.selectedLayer) { 1550 | this.selectedLayer.opacity = slider.value / 100; 1551 | this.render(); 1552 | // 保存到服务器并更新节点 1553 | await this.saveToServer(this.widget.value); 1554 | if (this.node) { 1555 | app.graph.runStep(); 1556 | } 1557 | } 1558 | }); 1559 | 1560 | container.appendChild(option); 1561 | container.appendChild(slider); 1562 | menu.appendChild(container); 1563 | }); 1564 | 1565 | document.body.appendChild(menu); 1566 | 1567 | // 点击其他地方关闭菜单 1568 | const closeMenu = (e) => { 1569 | if (!menu.contains(e.target)) { 1570 | document.body.removeChild(menu); 1571 | document.removeEventListener('mousedown', closeMenu); 1572 | } 1573 | }; 1574 | setTimeout(() => { 1575 | document.addEventListener('mousedown', closeMenu); 1576 | }, 0); 1577 | } 1578 | 1579 | handleBlendModeSelection(mode) { 1580 | if (this.selectedBlendMode === mode && !this.isAdjustingOpacity) { 1581 | // 第二次点击,应用效果 1582 | this.applyBlendMode(mode, this.blendOpacity); 1583 | this.closeBlendModeMenu(); 1584 | } else { 1585 | // 第一次点击,显示透明度调整器 1586 | this.selectedBlendMode = mode; 1587 | this.isAdjustingOpacity = true; 1588 | this.showOpacitySlider(mode); 1589 | } 1590 | } 1591 | 1592 | showOpacitySlider(mode) { 1593 | // 创建滑动条 1594 | const slider = document.createElement('input'); 1595 | slider.type = 'range'; 1596 | slider.min = '0'; 1597 | slider.max = '100'; 1598 | slider.value = this.blendOpacity; 1599 | slider.className = 'blend-opacity-slider'; 1600 | 1601 | slider.addEventListener('input', (e) => { 1602 | this.blendOpacity = parseInt(e.target.value); 1603 | // 可以添加实时预览效果 1604 | }); 1605 | 1606 | // 将滑动条添加到对应的混合模式选项下 1607 | const modeElement = document.querySelector(`[data-blend-mode="${mode}"]`); 1608 | if (modeElement) { 1609 | modeElement.appendChild(slider); 1610 | } 1611 | } 1612 | 1613 | applyBlendMode(mode, opacity) { 1614 | // 应用混合模式和透明度 1615 | this.currentLayer.style.mixBlendMode = mode; 1616 | this.currentLayer.style.opacity = opacity / 100; 1617 | 1618 | // 清理状态 1619 | this.selectedBlendMode = null; 1620 | this.isAdjustingOpacity = false; 1621 | } 1622 | } -------------------------------------------------------------------------------- /js/Canvas_view.js: -------------------------------------------------------------------------------- 1 | import { app } from 
"../../scripts/app.js"; 2 | import { api } from "../../scripts/api.js"; 3 | import { $el } from "../../scripts/ui.js"; 4 | import { Canvas } from "./Canvas.js"; 5 | 6 | async function createCanvasWidget(node, widget, app) { 7 | const canvas = new Canvas(node, widget); 8 | 9 | // 添加全局样式 10 | const style = document.createElement('style'); 11 | style.textContent = ` 12 | .painter-button { 13 | background: linear-gradient(to bottom, #4a4a4a, #3a3a3a); 14 | border: 1px solid #2a2a2a; 15 | border-radius: 4px; 16 | color: #ffffff; 17 | padding: 6px 12px; 18 | font-size: 12px; 19 | cursor: pointer; 20 | transition: all 0.2s ease; 21 | min-width: 80px; 22 | text-align: center; 23 | margin: 2px; 24 | text-shadow: 0 1px 1px rgba(0,0,0,0.2); 25 | } 26 | 27 | .painter-button:hover { 28 | background: linear-gradient(to bottom, #5a5a5a, #4a4a4a); 29 | box-shadow: 0 1px 3px rgba(0,0,0,0.2); 30 | } 31 | 32 | .painter-button:active { 33 | background: linear-gradient(to bottom, #3a3a3a, #4a4a4a); 34 | transform: translateY(1px); 35 | } 36 | 37 | .painter-button.primary { 38 | background: linear-gradient(to bottom, #4a6cd4, #3a5cc4); 39 | border-color: #2a4cb4; 40 | } 41 | 42 | .painter-button.primary:hover { 43 | background: linear-gradient(to bottom, #5a7ce4, #4a6cd4); 44 | } 45 | 46 | .painter-controls { 47 | background: linear-gradient(to bottom, #404040, #383838); 48 | border-bottom: 1px solid #2a2a2a; 49 | box-shadow: 0 2px 4px rgba(0,0,0,0.1); 50 | padding: 8px; 51 | display: flex; 52 | gap: 6px; 53 | flex-wrap: wrap; 54 | align-items: center; 55 | } 56 | 57 | .painter-container { 58 | background: #607080; /* 带蓝色的灰色背景 */ 59 | border: 1px solid #4a5a6a; 60 | border-radius: 6px; 61 | box-shadow: inset 0 0 10px rgba(0,0,0,0.1); 62 | } 63 | 64 | .painter-dialog { 65 | background: #404040; 66 | border-radius: 8px; 67 | box-shadow: 0 4px 12px rgba(0,0,0,0.3); 68 | padding: 20px; 69 | color: #ffffff; 70 | } 71 | 72 | .painter-dialog input { 73 | background: #303030; 74 | border: 1px solid #505050; 75 | border-radius: 4px; 76 | color: #ffffff; 77 | padding: 4px 8px; 78 | margin: 4px; 79 | width: 80px; 80 | } 81 | 82 | .painter-dialog button { 83 | background: #505050; 84 | border: 1px solid #606060; 85 | border-radius: 4px; 86 | color: #ffffff; 87 | padding: 4px 12px; 88 | margin: 4px; 89 | cursor: pointer; 90 | } 91 | 92 | .painter-dialog button:hover { 93 | background: #606060; 94 | } 95 | 96 | .blend-opacity-slider { 97 | width: 100%; 98 | margin: 5px 0; 99 | display: none; 100 | } 101 | 102 | .blend-mode-active .blend-opacity-slider { 103 | display: block; 104 | } 105 | 106 | .blend-mode-item { 107 | padding: 5px; 108 | cursor: pointer; 109 | position: relative; 110 | } 111 | 112 | .blend-mode-item.active { 113 | background-color: rgba(0,0,0,0.1); 114 | } 115 | `; 116 | document.head.appendChild(style); 117 | 118 | // 修改控制面板,使其高度自适应 119 | const controlPanel = $el("div.painterControlPanel", {}, [ 120 | $el("div.controls.painter-controls", { 121 | style: { 122 | position: "absolute", 123 | top: "0", 124 | left: "0", 125 | right: "0", 126 | minHeight: "50px", // 改为最小高度 127 | zIndex: "10", 128 | background: "linear-gradient(to bottom, #404040, #383838)", 129 | borderBottom: "1px solid #2a2a2a", 130 | boxShadow: "0 2px 4px rgba(0,0,0,0.1)", 131 | padding: "8px", 132 | display: "flex", 133 | gap: "6px", 134 | flexWrap: "wrap", 135 | alignItems: "center" 136 | }, 137 | // 添加监听器来动态整画布容器的位置 138 | onresize: (entries) => { 139 | const controlsHeight = entries[0].target.offsetHeight; 140 | canvasContainer.style.top = 
(controlsHeight + 10) + "px"; 141 | } 142 | }, [ 143 | $el("button.painter-button.primary", { 144 | textContent: "Add Image", 145 | onclick: () => { 146 | const input = document.createElement('input'); 147 | input.type = 'file'; 148 | input.accept = 'image/*'; 149 | input.multiple = true; 150 | input.onchange = async (e) => { 151 | for (const file of e.target.files) { 152 | // 创建图片对象 153 | const img = new Image(); 154 | img.onload = async () => { 155 | // 计算适当的缩放比例 156 | const scale = Math.min( 157 | canvas.width / img.width * 0.8, 158 | canvas.height / img.height * 0.8 159 | ); 160 | 161 | // 创建新图层 162 | const layer = { 163 | image: img, 164 | x: (canvas.width - img.width * scale) / 2, 165 | y: (canvas.height - img.height * scale) / 2, 166 | width: img.width * scale, 167 | height: img.height * scale, 168 | rotation: 0, 169 | zIndex: canvas.layers.length 170 | }; 171 | 172 | // 添加图层并选中 173 | canvas.layers.push(layer); 174 | canvas.selectedLayer = layer; 175 | 176 | // 渲染画布 177 | canvas.render(); 178 | 179 | // 立即保存并触发输出更新 180 | await canvas.saveToServer(widget.value); 181 | 182 | // 触发节点更新 183 | app.graph.runStep(); 184 | }; 185 | img.src = URL.createObjectURL(file); 186 | } 187 | }; 188 | input.click(); 189 | } 190 | }), 191 | $el("button.painter-button.primary", { 192 | textContent: "Import Input", 193 | onclick: async () => { 194 | try { 195 | console.log("Import Input clicked"); 196 | console.log("Node ID:", node.id); 197 | 198 | const response = await fetch(`/ycnode/get_canvas_data/${node.id}`); 199 | console.log("Response status:", response.status); 200 | 201 | const result = await response.json(); 202 | console.log("Full response data:", result); 203 | 204 | if (result.success && result.data) { 205 | if (result.data.image) { 206 | console.log("Found image data, importing..."); 207 | await canvas.importImage({ 208 | image: result.data.image, 209 | mask: result.data.mask 210 | }); 211 | await canvas.saveToServer(widget.value); 212 | app.graph.runStep(); 213 | } else { 214 | throw new Error("No image data found in cache"); 215 | } 216 | } else { 217 | throw new Error("Invalid response format"); 218 | } 219 | 220 | } catch (error) { 221 | console.error("Error importing input:", error); 222 | alert(`Failed to import input: ${error.message}`); 223 | } 224 | } 225 | }), 226 | $el("button.painter-button", { 227 | textContent: "Canvas Size", 228 | onclick: () => { 229 | const dialog = $el("div.painter-dialog", { 230 | style: { 231 | position: 'fixed', 232 | left: '50%', 233 | top: '50%', 234 | transform: 'translate(-50%, -50%)', 235 | zIndex: '1000' 236 | } 237 | }, [ 238 | $el("div", { 239 | style: { 240 | color: "white", 241 | marginBottom: "10px" 242 | } 243 | }, [ 244 | $el("label", { 245 | style: { 246 | marginRight: "5px" 247 | } 248 | }, [ 249 | $el("span", {}, ["Width: "]) 250 | ]), 251 | $el("input", { 252 | type: "number", 253 | id: "canvas-width", 254 | value: canvas.width, 255 | min: "1", 256 | max: "4096" 257 | }) 258 | ]), 259 | $el("div", { 260 | style: { 261 | color: "white", 262 | marginBottom: "10px" 263 | } 264 | }, [ 265 | $el("label", { 266 | style: { 267 | marginRight: "5px" 268 | } 269 | }, [ 270 | $el("span", {}, ["Height: "]) 271 | ]), 272 | $el("input", { 273 | type: "number", 274 | id: "canvas-height", 275 | value: canvas.height, 276 | min: "1", 277 | max: "4096" 278 | }) 279 | ]), 280 | $el("div", { 281 | style: { 282 | textAlign: "right" 283 | } 284 | }, [ 285 | $el("button", { 286 | id: "cancel-size", 287 | textContent: "Cancel" 288 | }), 289 | $el("button", { 
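// Confirm button: its click handler (registered after the dialog is appended) reads
// #canvas-width / #canvas-height, falls back to the current size when parsing fails,
// and applies the result via canvas.updateCanvasSize().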
290 | id: "confirm-size", 291 | textContent: "OK" 292 | }) 293 | ]) 294 | ]); 295 | document.body.appendChild(dialog); 296 | 297 | document.getElementById('confirm-size').onclick = () => { 298 | const width = parseInt(document.getElementById('canvas-width').value) || canvas.width; 299 | const height = parseInt(document.getElementById('canvas-height').value) || canvas.height; 300 | canvas.updateCanvasSize(width, height); 301 | document.body.removeChild(dialog); 302 | }; 303 | 304 | document.getElementById('cancel-size').onclick = () => { 305 | document.body.removeChild(dialog); 306 | }; 307 | } 308 | }), 309 | $el("button.painter-button", { 310 | textContent: "Remove Layer", 311 | onclick: () => { 312 | const index = canvas.layers.indexOf(canvas.selectedLayer); 313 | canvas.removeLayer(index); 314 | } 315 | }), 316 | $el("button.painter-button", { 317 | textContent: "Rotate +90°", 318 | onclick: () => canvas.rotateLayer(90) 319 | }), 320 | $el("button.painter-button", { 321 | textContent: "Scale +5%", 322 | onclick: () => canvas.resizeLayer(1.05) 323 | }), 324 | $el("button.painter-button", { 325 | textContent: "Scale -5%", 326 | onclick: () => canvas.resizeLayer(0.95) 327 | }), 328 | $el("button.painter-button", { 329 | textContent: "Layer Up", 330 | onclick: async () => { 331 | canvas.moveLayerUp(); 332 | await canvas.saveToServer(widget.value); 333 | app.graph.runStep(); 334 | } 335 | }), 336 | $el("button.painter-button", { 337 | textContent: "Layer Down", 338 | onclick: async () => { 339 | canvas.moveLayerDown(); 340 | await canvas.saveToServer(widget.value); 341 | app.graph.runStep(); 342 | } 343 | }), 344 | // 添加水平镜像按钮 345 | $el("button.painter-button", { 346 | textContent: "Mirror H", 347 | onclick: () => { 348 | canvas.mirrorHorizontal(); 349 | } 350 | }), 351 | // 添加垂直镜像按钮 352 | $el("button.painter-button", { 353 | textContent: "Mirror V", 354 | onclick: () => { 355 | canvas.mirrorVertical(); 356 | } 357 | }), 358 | // 在控制面板中添加抠图按钮 359 | $el("button.painter-button", { 360 | textContent: "Matting", 361 | onclick: async () => { 362 | try { 363 | if (!canvas.selectedLayer) { 364 | throw new Error("Please select an image first"); 365 | } 366 | 367 | // 获取或创建状态指示器 368 | const statusIndicator = MattingStatusIndicator.getInstance(controlPanel.querySelector('.controls')); 369 | 370 | // 添加状态监听 371 | const updateStatus = (event) => { 372 | const {status} = event.detail; 373 | statusIndicator.setStatus(status); 374 | }; 375 | 376 | api.addEventListener("matting_status", updateStatus); 377 | 378 | try { 379 | // 获取图像据 380 | const imageData = await canvas.getLayerImageData(canvas.selectedLayer); 381 | console.log("Sending image to server..."); 382 | 383 | // 发送请求 384 | const response = await fetch("/matting", { 385 | method: "POST", 386 | headers: { 387 | "Content-Type": "application/json", 388 | }, 389 | body: JSON.stringify({ 390 | image: imageData, 391 | threshold: 0.5, 392 | refinement: 1 393 | }) 394 | }); 395 | 396 | if (!response.ok) { 397 | throw new Error(`Server error: ${response.status}`); 398 | } 399 | 400 | const result = await response.json(); 401 | console.log("Creating new layer with matting result..."); 402 | 403 | // 创建新图层 404 | const mattedImage = new Image(); 405 | mattedImage.onload = async () => { 406 | // 创建临时画布来处理透明度 407 | const tempCanvas = document.createElement('canvas'); 408 | const tempCtx = tempCanvas.getContext('2d'); 409 | tempCanvas.width = canvas.selectedLayer.width; 410 | tempCanvas.height = canvas.selectedLayer.height; 411 | 412 | // 绘制原始图像 413 | 
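// Draw the matted result at the selected layer's current size, then wrap the temp
// canvas in a new Image and push it as a new layer with the same position and rotation.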
tempCtx.drawImage( 414 | mattedImage, 415 | 0, 0, 416 | tempCanvas.width, tempCanvas.height 417 | ); 418 | 419 | // 创建新图层 420 | const newImage = new Image(); 421 | newImage.onload = async () => { 422 | const newLayer = { 423 | image: newImage, 424 | x: canvas.selectedLayer.x, 425 | y: canvas.selectedLayer.y, 426 | width: canvas.selectedLayer.width, 427 | height: canvas.selectedLayer.height, 428 | rotation: canvas.selectedLayer.rotation, 429 | zIndex: canvas.layers.length + 1 430 | }; 431 | 432 | canvas.layers.push(newLayer); 433 | canvas.selectedLayer = newLayer; 434 | canvas.render(); 435 | 436 | // 保存并更新 437 | await canvas.saveToServer(widget.value); 438 | app.graph.runStep(); 439 | }; 440 | 441 | // 转换为PNG并保持透明度 442 | newImage.src = tempCanvas.toDataURL('image/png'); 443 | }; 444 | 445 | mattedImage.src = result.matted_image; 446 | console.log("Matting result applied successfully"); 447 | 448 | } finally { 449 | api.removeEventListener("matting_status", updateStatus); 450 | } 451 | 452 | } catch (error) { 453 | console.error("Matting error:", error); 454 | alert(`Error during matting process: ${error.message}`); 455 | } 456 | } 457 | }) 458 | ]) 459 | ]); 460 | 461 | // 创建ResizeObserver来监控控制面板的高度变化 462 | const resizeObserver = new ResizeObserver((entries) => { 463 | const controlsHeight = entries[0].target.offsetHeight; 464 | canvasContainer.style.top = (controlsHeight + 10) + "px"; 465 | }); 466 | 467 | // 监控控制面板的大小变化 468 | resizeObserver.observe(controlPanel.querySelector('.controls')); 469 | 470 | // 获取触发器widget 471 | const triggerWidget = node.widgets.find(w => w.name === "trigger"); 472 | 473 | // 创建更新函数 474 | const updateOutput = async () => { 475 | // 保存画布 476 | await canvas.saveToServer(widget.value); 477 | // 更新触发器值 478 | triggerWidget.value = (triggerWidget.value + 1) % 99999999; 479 | // 触发节点更新 480 | app.graph.runStep(); 481 | }; 482 | 483 | // 修改所有可能触发更新的操作 484 | const addUpdateToButton = (button) => { 485 | const origClick = button.onclick; 486 | button.onclick = async (...args) => { 487 | await origClick?.(...args); 488 | await updateOutput(); 489 | }; 490 | }; 491 | 492 | // 为所有按钮添加更新逻辑 493 | controlPanel.querySelectorAll('button').forEach(addUpdateToButton); 494 | 495 | // 修改画布容器样式,使用动态top值 496 | const canvasContainer = $el("div.painterCanvasContainer.painter-container", { 497 | style: { 498 | position: "absolute", 499 | top: "60px", // 初始值 500 | left: "10px", 501 | right: "10px", 502 | bottom: "10px", 503 | display: "flex", 504 | justifyContent: "center", 505 | alignItems: "center", 506 | overflow: "hidden" 507 | } 508 | }, [canvas.canvas]); 509 | 510 | // 修改节点大小调整逻辑 511 | node.onResize = function() { 512 | const minSize = 300; 513 | const controlsElement = controlPanel.querySelector('.controls'); 514 | const controlPanelHeight = controlsElement.offsetHeight; // 取实际高 515 | const padding = 20; 516 | 517 | // 保持节点宽度,高度根据画布比例调整 518 | const width = Math.max(this.size[0], minSize); 519 | const height = Math.max( 520 | width * (canvas.height / canvas.width) + controlPanelHeight + padding * 2, 521 | minSize + controlPanelHeight 522 | ); 523 | 524 | this.size[0] = width; 525 | this.size[1] = height; 526 | 527 | // 计算画布的实际可用空间 528 | const availableWidth = width - padding * 2; 529 | const availableHeight = height - controlPanelHeight - padding * 2; 530 | 531 | // 更新画布尺寸,保持比例 532 | const scale = Math.min( 533 | availableWidth / canvas.width, 534 | availableHeight / canvas.height 535 | ); 536 | 537 | canvas.canvas.style.width = (canvas.width * scale) + "px"; 538 | 
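// Only the CSS display size is scaled here; the canvas's internal resolution
// (canvas.width / canvas.height) is left unchanged, so layer content keeps its full pixel detail.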
canvas.canvas.style.height = (canvas.height * scale) + "px"; 539 | 540 | // 强制重新渲染 541 | canvas.render(); 542 | }; 543 | 544 | // 添加拖拽事件监听 545 | canvas.canvas.addEventListener('mouseup', updateOutput); 546 | canvas.canvas.addEventListener('mouseleave', updateOutput); 547 | 548 | // 创建一个包含控制面板和画布的容器 549 | const mainContainer = $el("div.painterMainContainer", { 550 | style: { 551 | position: "relative", 552 | width: "100%", 553 | height: "100%" 554 | } 555 | }, [controlPanel, canvasContainer]); 556 | 557 | // 将主容器添加到节点 558 | const mainWidget = node.addDOMWidget("mainContainer", "widget", mainContainer); 559 | 560 | // 设置节点的默认大小 561 | node.size = [500, 500]; // 设置初始大小为正方形 562 | 563 | // 在执行开始时保存数据 564 | api.addEventListener("execution_start", async () => { 565 | // 保存画布 566 | await canvas.saveToServer(widget.value); 567 | 568 | // 保存当前节点的输入数据 569 | if (node.inputs[0].link) { 570 | const linkId = node.inputs[0].link; 571 | const inputData = app.nodeOutputs[linkId]; 572 | if (inputData) { 573 | ImageCache.set(linkId, inputData); 574 | } 575 | } 576 | }); 577 | 578 | // 移除原来在 saveToServer 中的缓存清理 579 | const originalSaveToServer = canvas.saveToServer; 580 | canvas.saveToServer = async function(fileName) { 581 | const result = await originalSaveToServer.call(this, fileName); 582 | // 移除这里的缓存清理 583 | // ImageCache.clear(); 584 | return result; 585 | }; 586 | 587 | return { 588 | canvas: canvas, 589 | panel: controlPanel 590 | }; 591 | } 592 | 593 | // 修改状态指示器类,确保单例模式 594 | class MattingStatusIndicator { 595 | static instance = null; 596 | 597 | static getInstance(container) { 598 | if (!MattingStatusIndicator.instance) { 599 | MattingStatusIndicator.instance = new MattingStatusIndicator(container); 600 | } 601 | return MattingStatusIndicator.instance; 602 | } 603 | 604 | constructor(container) { 605 | this.indicator = document.createElement('div'); 606 | this.indicator.style.cssText = ` 607 | width: 10px; 608 | height: 10px; 609 | border-radius: 50%; 610 | background-color: #808080; 611 | margin-left: 10px; 612 | display: inline-block; 613 | transition: background-color 0.3s; 614 | `; 615 | 616 | const style = document.createElement('style'); 617 | style.textContent = ` 618 | .processing { 619 | background-color: #2196F3; 620 | animation: blink 1s infinite; 621 | } 622 | .completed { 623 | background-color: #4CAF50; 624 | } 625 | .error { 626 | background-color: #f44336; 627 | } 628 | @keyframes blink { 629 | 0% { opacity: 1; } 630 | 50% { opacity: 0.4; } 631 | 100% { opacity: 1; } 632 | } 633 | `; 634 | document.head.appendChild(style); 635 | 636 | container.appendChild(this.indicator); 637 | } 638 | 639 | setStatus(status) { 640 | this.indicator.className = ''; // 清除所有状态 641 | if (status) { 642 | this.indicator.classList.add(status); 643 | } 644 | if (status === 'completed') { 645 | setTimeout(() => { 646 | this.indicator.classList.remove('completed'); 647 | }, 2000); 648 | } 649 | } 650 | } 651 | 652 | // 验证 ComfyUI 的图像数据格式 653 | function validateImageData(data) { 654 | // 打印完整的输入数据结构 655 | console.log("Validating data structure:", { 656 | hasData: !!data, 657 | type: typeof data, 658 | isArray: Array.isArray(data), 659 | keys: data ? Object.keys(data) : null, 660 | shape: data?.shape, 661 | dataType: data?.data ? 
data.data.constructor.name : null, 662 | fullData: data // 打印完整数据 663 | }); 664 | 665 | // 检查是否为空 666 | if (!data) { 667 | console.log("Data is null or undefined"); 668 | return false; 669 | } 670 | 671 | // 如果是数组,获取第一个元素 672 | if (Array.isArray(data)) { 673 | console.log("Data is array, getting first element"); 674 | data = data[0]; 675 | } 676 | 677 | // 检查数据结构 678 | if (!data || typeof data !== 'object') { 679 | console.log("Invalid data type"); 680 | return false; 681 | } 682 | 683 | // 检查是否有数据属性 684 | if (!data.data) { 685 | console.log("Missing data property"); 686 | return false; 687 | } 688 | 689 | // 检查数据类型 690 | if (!(data.data instanceof Float32Array)) { 691 | // 如果不是 Float32Array,尝试转换 692 | try { 693 | data.data = new Float32Array(data.data); 694 | } catch (e) { 695 | console.log("Failed to convert data to Float32Array:", e); 696 | return false; 697 | } 698 | } 699 | 700 | return true; 701 | } 702 | 703 | // 转换 ComfyUI 图像数据为画布可用格式 704 | function convertImageData(data) { 705 | console.log("Converting image data:", data); 706 | 707 | // 如果是数组,获取第一个元素 708 | if (Array.isArray(data)) { 709 | data = data[0]; 710 | } 711 | 712 | // 获取维度信息 [batch, height, width, channels] 713 | const shape = data.shape; 714 | const height = shape[1]; // 1393 715 | const width = shape[2]; // 1393 716 | const channels = shape[3]; // 3 717 | const floatData = new Float32Array(data.data); 718 | 719 | console.log("Processing dimensions:", { height, width, channels }); 720 | 721 | // 创建画布格式的数据 (RGBA) 722 | const rgbaData = new Uint8ClampedArray(width * height * 4); 723 | 724 | // 转换数据格式 [batch, height, width, channels] -> RGBA 725 | for (let h = 0; h < height; h++) { 726 | for (let w = 0; w < width; w++) { 727 | const pixelIndex = (h * width + w) * 4; 728 | const tensorIndex = (h * width + w) * channels; 729 | 730 | // 复制 RGB 通道并转换值范围 (0-1 -> 0-255) 731 | for (let c = 0; c < channels; c++) { 732 | const value = floatData[tensorIndex + c]; 733 | rgbaData[pixelIndex + c] = Math.max(0, Math.min(255, Math.round(value * 255))); 734 | } 735 | 736 | // 设置 alpha 通道为完全不透明 737 | rgbaData[pixelIndex + 3] = 255; 738 | } 739 | } 740 | 741 | // 返回画布可用的格式 742 | return { 743 | data: rgbaData, // Uint8ClampedArray 格式的 RGBA 数据 744 | width: width, // 图像宽度 745 | height: height // 图像高度 746 | }; 747 | } 748 | 749 | // 处理遮罩数据 750 | function applyMaskToImageData(imageData, maskData) { 751 | console.log("Applying mask to image data"); 752 | 753 | const rgbaData = new Uint8ClampedArray(imageData.data); 754 | const width = imageData.width; 755 | const height = imageData.height; 756 | 757 | // 获取遮罩数据 [batch, height, width] 758 | const maskShape = maskData.shape; 759 | const maskFloatData = new Float32Array(maskData.data); 760 | 761 | console.log(`Applying mask of shape: ${maskShape}`); 762 | 763 | // 将遮罩数据应用到 alpha 通道 764 | for (let h = 0; h < height; h++) { 765 | for (let w = 0; w < width; w++) { 766 | const pixelIndex = (h * width + w) * 4; 767 | const maskIndex = h * width + w; 768 | // 使遮罩值作为 alpha 值,转换值范围从 0-1 到 0-255 769 | const alpha = maskFloatData[maskIndex]; 770 | rgbaData[pixelIndex + 3] = Math.max(0, Math.min(255, Math.round(alpha * 255))); 771 | } 772 | } 773 | 774 | console.log("Mask application completed"); 775 | 776 | return { 777 | data: rgbaData, 778 | width: width, 779 | height: height 780 | }; 781 | } 782 | 783 | // 修改缓存管理 784 | const ImageCache = { 785 | cache: new Map(), 786 | 787 | // 存储图像数据 788 | set(key, imageData) { 789 | console.log("Caching image data for key:", key); 790 | this.cache.set(key, imageData); 
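// Entries are keyed by the input link id captured in the execution_start handler;
// saveToServer no longer clears this cache, so "Import Input" can reuse the data later.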
791 | }, 792 | 793 | // 获取图像数据 794 | get(key) { 795 | const data = this.cache.get(key); 796 | console.log("Retrieved cached data for key:", key, !!data); 797 | return data; 798 | }, 799 | 800 | // 检查是否存在 801 | has(key) { 802 | return this.cache.has(key); 803 | }, 804 | 805 | // 清除缓存 806 | clear() { 807 | console.log("Clearing image cache"); 808 | this.cache.clear(); 809 | } 810 | }; 811 | 812 | // 改进数据准备函数 813 | function prepareImageForCanvas(inputImage) { 814 | console.log("Preparing image for canvas:", inputImage); 815 | 816 | try { 817 | // 如果是数组,获取第一个元素 818 | if (Array.isArray(inputImage)) { 819 | inputImage = inputImage[0]; 820 | } 821 | 822 | if (!inputImage || !inputImage.shape || !inputImage.data) { 823 | throw new Error("Invalid input image format"); 824 | } 825 | 826 | // 获取维度信息 [batch, height, width, channels] 827 | const shape = inputImage.shape; 828 | const height = shape[1]; 829 | const width = shape[2]; 830 | const channels = shape[3]; 831 | const floatData = new Float32Array(inputImage.data); 832 | 833 | console.log("Image dimensions:", { height, width, channels }); 834 | 835 | // 创建 RGBA 格式数据 836 | const rgbaData = new Uint8ClampedArray(width * height * 4); 837 | 838 | // 转换数据格式 [batch, height, width, channels] -> RGBA 839 | for (let h = 0; h < height; h++) { 840 | for (let w = 0; w < width; w++) { 841 | const pixelIndex = (h * width + w) * 4; 842 | const tensorIndex = (h * width + w) * channels; 843 | 844 | // 转换 RGB 通道 (0-1 -> 0-255) 845 | for (let c = 0; c < channels; c++) { 846 | const value = floatData[tensorIndex + c]; 847 | rgbaData[pixelIndex + c] = Math.max(0, Math.min(255, Math.round(value * 255))); 848 | } 849 | 850 | // 设置 alpha 通道 851 | rgbaData[pixelIndex + 3] = 255; 852 | } 853 | } 854 | 855 | // 返回画布需要的格式 856 | return { 857 | data: rgbaData, 858 | width: width, 859 | height: height 860 | }; 861 | } catch (error) { 862 | console.error("Error preparing image:", error); 863 | throw new Error(`Failed to prepare image: ${error.message}`); 864 | } 865 | } 866 | 867 | app.registerExtension({ 868 | name: "Comfy.CanvasNode", 869 | async beforeRegisterNodeDef(nodeType, nodeData, app) { 870 | if (nodeType.comfyClass === "CanvasNode") { 871 | const onNodeCreated = nodeType.prototype.onNodeCreated; 872 | nodeType.prototype.onNodeCreated = async function() { 873 | const r = onNodeCreated?.apply(this, arguments); 874 | 875 | const widget = this.widgets.find(w => w.name === "canvas_image"); 876 | await createCanvasWidget(this, widget, app); 877 | 878 | return r; 879 | }; 880 | } 881 | } 882 | }); 883 | 884 | async function handleImportInput(data) { 885 | if (data && data.image) { 886 | const imageData = data.image; 887 | await importImage(imageData); 888 | } 889 | } -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [project] 2 | name = "comfyui-ycanvas" 3 | description = "" 4 | version = "1.0.0" 5 | license = {file = "LICENSE"} 6 | dependencies = ["torch", "torchvision", "transformers", "aiohttp", "numpy", "tqdm", "Pillow"] 7 | 8 | [project.urls] 9 | Repository = "https://github.com/yichengup/Comfyui-Ycanvas" 10 | # Used by Comfy Registry https://comfyregistry.org 11 | 12 | [tool.comfy] 13 | PublisherId = "" 14 | DisplayName = "Comfyui-Ycanvas" 15 | Icon = "" 16 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 
1 | torch 2 | torchvision 3 | transformers 4 | aiohttp 5 | numpy 6 | tqdm 7 | Pillow 8 | --------------------------------------------------------------------------------