├── .gitignore ├── .gitmodules ├── LICENSE ├── README.md ├── __init__.py ├── docs ├── 1细节优化.png ├── 1细节优化2.png ├── 2固定提示词大师.png ├── 2提示词模板管理.png ├── 2文本拼接.png ├── 2自定义随机提示词大师.png ├── 3水印大师.png ├── 4图像尺寸获取.png ├── 4图像指定保存路径.png ├── 4图像镜像翻转.png ├── 4滤镜.png ├── 4颜色迁移.png ├── 5遮罩处理.png ├── 5遮罩处理2.png ├── 5遮罩羽化.png ├── 6百度翻译API.png ├── LOGO1.png ├── LOGO2.png ├── workflow.png ├── 修复前原图.png ├── 修复后.png ├── 修复后2.png ├── 内补.png ├── 内补前.png ├── 内补后.png ├── 加载任意图像.png ├── 加载任意图像2.png ├── 局部修复前.png └── 局部修复后.png ├── fonts └── 优设标题黑.ttf ├── js └── wbksh.js ├── json ├── egtscglds.json └── options.json ├── nodes ├── EGJBCHBMQ.py ├── EGJDFDHT.py ├── EGLATENTBISC.py ├── EGSZHZ.py ├── EGSZJDYS.py ├── EGTXSFBLS.py ├── EGWBZYSRK.py ├── EGZZJDYHHT.py ├── EGZZTXHZ.py ├── egbdfy.py ├── egcchq.py ├── egcgysqy.py ├── egcjpjnode.py ├── egjfzz.py ├── egjxfz.py ├── egryqh.py ├── egszqh.py ├── egtjtxsy.py ├── egtscdscjnode.py ├── egtscdsdgnode.py ├── egtscdsfgnode.py ├── egtscdsjtnode.py ├── egtscdsqtnode.py ├── egtscdsrwnode.py ├── egtscdssjdiy.py ├── egtscdswpnode.py ├── egtscdszlnode.py ├── egtscmb.py ├── egtxcglj.py ├── egtxljbc.py ├── egtxwhlj.py ├── egtxystz.py ├── egtxzdljjz.py ├── egwbksh.py ├── egwbpj.py ├── egwbsjpj.py ├── egwzsytj.py ├── egysbhd.py ├── egysblld.py ├── egysqyld.py ├── egyssxqy.py ├── egzzbsyh.py ├── egzzcjnode.py ├── egzzcjpj.py ├── egzzhsyh.py ├── egzzhtkz.py ├── egzzkzyh.py └── egzzmhnode.py └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | download_cache 3 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/.gitmodules -------------------------------------------------------------------------------- /LICENSE: 
-------------------------------------------------------------------------------- 1 | # My Project 2 | 3 | This project is a derivative work based on several open source projects: 4 | 5 | 1. rgthree-comfy by rgthree (https://github.com/rgthree/rgthree-comfy) 6 | 2. ComfyUI-Custom-Scripts by pythongosssss (https://github.com/pythongosssss/ComfyUI-Custom-Scripts) 7 | 3. ComfyUI-Mixlab-Nodes by shadowcz007 (https://github.com/shadowcz007/comfyui-mixlab-nodes) 8 | 9 | Copyright (c) 2023 11dog(二狗子) 10 | 11 | Permission is hereby granted, free of charge, to any person obtaining a copy 12 | of this software and associated documentation files (the "Software"), to deal 13 | in the Software without restriction, including without limitation the rights 14 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 15 | copies of the Software, and to permit persons to whom the Software is 16 | furnished to do so, subject to the following conditions: 17 | 18 | The above copyright notice and this permission notice shall be included in all 19 | copies or substantial portions of the Software. 20 | 21 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 22 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 23 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 24 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 25 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 26 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 27 | SOFTWARE. 28 | 29 | This project is also subject to the terms of the MIT license. See the LICENSE file for more details. 30 | 31 | Additional restrictions: 32 | 33 | - This project cannot be used for commercial purposes. 
Any use of the software, including but not limited to the sale, rental, or distribution of the software or derivative works, is strictly prohibited without the author's express written consent. 34 | 35 | - The software includes self-written code parts that cannot be sold, rented, or distributed separately or as part of derivative works without the author's express written consent. 36 | 37 | 38 | 这个项目是基于几个开源项目的衍生作品: 39 | 40 | 1.rgthree由rgthree舒适(https://github.com/rgthree/rgthree-comfy) 41 | 42 | 2.CompyUI自定义脚本by pythongossss(https://github.com/pythongosssss/ComfyUI-Custom-Scripts) 43 | 44 | 3.shadowcz007的ComfyUI Mixrab节点(https://github.com/shadowcz007/comfyui-mixlab-nodes) 45 | 46 | 版权所有(c)2023 11dog(二狗子) 47 | 48 | 特此免费向任何获得副本的人授予许可 49 | 50 | 该软件和相关文档文件(“软件”),以处理在软件中不受限制,包括但不限于权利使用、复制、修改、合并、发布、分发、分许可和/或销售软件的副本,并允许软件的使用者 51 | 52 | 按照以下条件提供: 53 | 54 | 上述版权声明和本许可声明应包含在软件的副本或实质部分。 55 | 56 | 软件是“按原样”提供的,没有任何形式的担保,明示或隐含的,包括但不限于适销性保证, 57 | 58 | 适用于特定目的和不侵权。在任何情况下作者或版权持有人应对任何索赔、损害或其他责任,无论是在合同诉讼、侵权行为还是其他方面,出于或与软件有关,或使用或其他交易软件。 59 | 60 | 该项目还受麻省理工学院许可证条款的约束。有关更多详细信息,请参阅许可证文件。 61 | 62 | 其他限制: 63 | 64 | -该项目不能用于商业目的。您不能以营利为目的出售、出租或分发软件或衍生作品。 65 | 66 | -该项目可以用于其他开源项目,但必须保留本声明和投稿作者的原始版权声明。 67 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![灵仙儿和二狗子](docs/LOGO2.png "LOGO2") 2 | 哈喽!我是二狗子(2🐕)!这是一套comfyui的多功能自定义节点套件,涵盖了提示词管理,水印添加,图像细化,指定保存图像路径,常规文本、图像处理等40+节点 3 | Hello! 
I am Er Gouzi (2🐕)! This is a multifunctional custom node suite for ComfyUI, covering 40+ nodes for prompt management, watermarking, image refinement, custom image save paths, general text and image processing, and more 4 | 5 | ## 安装 6 | Installation 7 | 8 | 首先,打开命令行终端,然后切换到您的ComfyUI的`custom_nodes`目录: 9 | First, open a command line terminal and switch to the `custom_nodes` directory of your ComfyUI installation: 10 | 11 | ```cd /path/to/your/ComfyUI/custom_nodes``` 12 | 13 | 将/path/to/your/ComfyUI替换为您的ComfyUI项目所在的实际路径。 14 | Replace /path/to/your/ComfyUI with the actual path where your ComfyUI project is located. 15 | 接下来,克隆Comfyui-ergouzi-Nodes仓库: 16 | Next, clone the Comfyui-ergouzi-Nodes repository: 17 | 18 | ```git clone https://github.com/11dogzi/Comfyui-ergouzi-Nodes.git``` 19 | 20 | ## 节点介绍 21 | Node Introduction 22 | 如果你需要中文版(这个库的插件功能更加齐全)本库不再更新!可以到[二狗子的节点组中文版](https://github.com/11dogzi/Comfyui-ergouzi-DGNJD) (This repository is no longer updated! If you need the Chinese version, whose plugin features are more complete, use the linked repository.) 23 | ## 提示词大师: 24 | Prompt Master: 25 | 众多可选类型提示词节点,可随机 26 | Many selectable prompt category nodes, with optional randomization 27 | ![提示词大师](docs/2固定提示词大师.png "2固定提示词大师") 28 | 自定义类型随机提示词节点,可根据需求选择类型,然后随机 29 | Custom random prompt node: pick the categories you need, then randomize 30 | ![提示词大师](docs/2自定义随机提示词大师.png "2自定义随机提示词大师") 31 | 提示词模板管理器,可快捷删除保存修改提示词模板 32 | Prompt template manager for quickly saving, modifying, and deleting prompt templates 33 | ![提示词大师](docs/2提示词模板管理.png "2提示词模板管理") 34 | 文本自由拼接节点,配合提示词模板使用更加自由的使用提示词 35 | Free text concatenation nodes; combined with the prompt templates, they make prompts more flexible to use 36 | ![提示词大师](docs/2文本拼接.png "2文本拼接") 37 | 可保存下列图像以加载工作流 38 | Save the image below to load the workflow 39 | ![提示词大师](docs/workflow.png "提示词大师工作流") 40 | 41 | 42 | ## 细化处理节点: 43 | Refinement nodes: 44 | 更自由的局部处理方式,可对遮罩区域进行裁剪,自动识别裁剪区域,通过其它节点处理拼接回原图,配合语义分割等效果更佳!
45 | A more flexible approach to local processing: it crops the masked area, automatically detects the crop region, lets other nodes process it, and stitches the result back into the original image. Combined with semantic segmentation and similar tools, the results are even better! 46 | 以下是两个使用案例 47 | Here are two use cases 48 | 局部修复 49 | Local repair 50 | 通过涂抹需修复区域完成任意局部修复 51 | Perform arbitrary local repairs by painting over the area to be fixed 52 | ![细节修复](docs/1细节优化.png "1细节优化") 53 | 54 | ![细节修复](docs/修复前原图.png "修复前原图") ![细节修复](docs/修复后.png "修复后") 55 | ![细节修复](docs/局部修复前.png "局部修复前") ![细节修复](docs/局部修复后.png "局部修复后") 56 | 57 | 可保存下列图像以加载工作流 58 | Save the image below to load the workflow 59 | ![细节修复](docs/修复后.png "修复后") 60 | 61 | 内补绘制 62 | Inpainting 63 | 配合控制网等插件完成局部绘制 64 | Combine with ControlNet and similar plugins for local repainting 65 | ![细节修复](docs/1细节优化2.png "1细节优化2") 66 | 67 | ![细节修复](docs/内补前.png "内补前") ![细节修复](docs/修复后2.png "修复后2") 68 | ![细节修复](docs/内补.png "内补") ![细节修复](docs/内补后.png "内补后") 69 | 70 | 可保存下列图像打开工作流 71 | Save the image below to open the workflow 72 | ![细节修复](docs/修复后2.png "修复后2") 73 | 74 | ## 水印大师: 75 | Watermark Master: 76 | 无论是生成文字水印,还是上传成品水印,通通可以实现,配合批量加载图像可以批量添加! 77 | Generate text watermarks or upload ready-made watermark images; combined with batch image loading, watermarks can be applied in batches! 78 | ![水印大师](docs/3水印大师.png "3水印大师") 79 | 80 | ## 常规图像处理节点: 81 | General image processing nodes: 82 | 现在我们可以指定图像的保存路径了! 83 | You can now specify the image save path! 84 | ![常规图像](docs/4图像指定保存路径.png "4图像指定保存路径") 85 | 加载任意图像!文件或者文件夹!包括psd,而且可以实时更新! 86 | Load any image, from a file or a whole folder, including PSD, with real-time updates!
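The "load any image" behavior described above (a single file or a whole folder, PSD included) can be sketched roughly as follows. This is a hypothetical illustration, not the node's actual implementation; the helper name `load_any` and the extension list are made up, and it assumes Pillow is installed (Pillow can read PSD files as a flattened composite):

```python
# Hypothetical sketch of a "load any image" helper; NOT this pack's actual code.
# Assumes Pillow is installed. `load_any` and EXTS are illustrative names.
from pathlib import Path
from PIL import Image

EXTS = {".png", ".jpg", ".jpeg", ".webp", ".bmp", ".psd"}

def load_any(path):
    """Load one image from a file, or every supported image in a folder."""
    p = Path(path)
    if p.is_file():
        files = [p]
    else:
        # Re-listing the folder on every call is what makes updates "real-time".
        files = sorted(f for f in p.iterdir() if f.suffix.lower() in EXTS)
    # Pillow also opens PSD files (as a flattened composite), so no special case.
    return [Image.open(f).convert("RGB") for f in files]
```

Calling `load_any` on a folder returns one `PIL.Image` per supported file, so the same helper covers both the single-file and batch cases.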
87 | ![加载任意图像](docs/加载任意图像.png "加载任意图像") 88 | ![加载任意图像](docs/加载任意图像2.png "加载任意图像2") 89 | 滤镜调色,批量或单选 90 | Filter color grading, in batch or individually 91 | ![滤镜](docs/4滤镜.png "4滤镜") 92 | 颜色迁移 93 | Color transfer 94 | ![常规图像](docs/4颜色迁移.png "4颜色迁移") 95 | 图像尺寸获取 96 | Image size acquisition 97 | ![常规图像](docs/4图像尺寸获取.png "4图像尺寸获取") 98 | 镜像翻转 99 | Mirror flip 100 | ![常规图像](docs/4图像镜像翻转.png "4图像镜像翻转") 101 | 102 | ## 常规遮罩处理节点: 103 | General mask processing nodes: 104 | ![常规遮罩](docs/5遮罩处理.png "5遮罩处理") 105 | ![常规遮罩](docs/5遮罩处理2.png "5遮罩处理2") 106 | ![常规遮罩](docs/5遮罩羽化.png "5遮罩羽化") 107 | 108 | ## 百度翻译API: 109 | Baidu Translation API: 110 | 仅第一次使用需要输入id和key 111 | You only need to enter your ID and key the first time you use it 112 | 申请百度翻译API,请访问:[百度翻译API申请链接](https://fanyi-api.baidu.com/?aldtype=16047&ext_channel=Aldtype&fr=pcHeader) (To apply for a Baidu Translation API account, use the linked page.) 113 | 114 | ![百度翻译API](docs/6百度翻译API.png "6百度翻译API") 115 | 116 | 117 | ## 更多SD免费教程 118 | More free SD tutorials 119 | 灵仙儿和二狗子的Bilibili空间,欢迎访问: 120 | Bilibili space of Lingxian'er and Er Gouzi, welcome to visit: 121 | [灵仙儿二狗子的Bilibili空间](https://space.bilibili.com/19723588?spm_id_from=333.1007.0.0) 122 | 欢迎加入我们的QQ频道,点击这里进入: 123 | Welcome to join our QQ channel, click here to enter: 124 | [二狗子的QQ频道](https://pd.qq.com/s/3d9ys5wpr) 125 | ![LOGO](docs/LOGO1.png "LOGO1")![LOGO](docs/LOGO1.png "LOGO1")![LOGO](docs/LOGO1.png "LOGO1") 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 139 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- 1 | from .nodes.egbdfy import EGBDAPINode 2 | from .nodes.egcchq import EGTXCCHQ 3 | from .nodes.egcgysqy import EGSCQYQBQYNode 4 | from .nodes.egcjpjnode import EGCJPJNode 5 | from .nodes.egjfzz import EGJFZZSC 6 | from .nodes.egjxfz import EGJXFZNODE 7 | from .nodes.egryqh import EGRYQHNode 8 | from .nodes.egszqh import EGXZQHNode 9 | from .nodes.egtjtxsy import EGCPSYTJNode 10 |
from .nodes.egtscdscjnode import EGTSCDSCJLNode 11 | from .nodes.egtscdsdgnode import EGTSCDSDGLNode 12 | from .nodes.egtscdsfgnode import EGTSCDSFGLNode 13 | from .nodes.egtscdsjtnode import EGTSCDSJTLNode 14 | from .nodes.egtscdsqtnode import EGTSCDSQTLNode 15 | from .nodes.egtscdsrwnode import EGTSCDSRWLNode 16 | from .nodes.egtscdssjdiy import EGSJNode 17 | from .nodes.egtscdswpnode import EGTSCDSWPLNode 18 | from .nodes.egtscdszlnode import EGTSCDSZLLNode 19 | from .nodes.egtscmb import EGTSCMBGLNode 20 | from .nodes.egtxljbc import EGTXBCLJBCNode 21 | from .nodes.egwbpj import EGWBRYPJ 22 | from .nodes.egwbsjpj import EGWBSJPJ 23 | from .nodes.egysbhd import EGSCQYBHDQYYNode 24 | from .nodes.egysblld import EGYSQYBLLDNode 25 | from .nodes.egysqyld import EGYSQYBBLLDNode 26 | from .nodes.egyssxqy import EGSCQSXQYNode 27 | from .nodes.egzzbsyh import EGZZBSYH 28 | from .nodes.egzzcjnode import EGTXZZCJNode 29 | from .nodes.egzzhsyh import EGZZHSYH 30 | from .nodes.egzzhtkz import EGZZKZHTNODE 31 | from .nodes.egzzkzyh import EGZZSSKZNODE 32 | from .nodes.egzzmhnode import EGZZBYYHNode 33 | from .nodes.egwzsytj import EGYSZTNode 34 | from .nodes.egwbksh import EGWBKSH 35 | from .nodes.egtxzdljjz import EGJZRYTX 36 | from .nodes.egtxcglj import EGTXLJNode 37 | from .nodes.egtxystz import EGHTYSTZNode 38 | from .nodes.egtxwhlj import EGWHLJ 39 | from .nodes.egzzcjpj import EGZZHBCJNode 40 | from .nodes.EGJDFDHT import EGRYHT 41 | from .nodes.EGSZJDYS import EGSZJDYS 42 | from .nodes.EGSZHZ import EG_SS_RYZH 43 | from .nodes.EGWBZYSRK import EGZYWBKNode 44 | from .nodes.EGZZTXHZ import EGTXZZZHNode 45 | from .nodes.EGJBCHBMQ import EGJBCH 46 | from .nodes.EGTXSFBLS import EGTXSFBLSNode 47 | from .nodes.EGLATENTBISC import EGKLATENT 48 | from .nodes.EGZZJDYHHT import EGZZMHHT 49 | 50 | NODE_CLASS_MAPPINGS = { 51 | "EG_FX_BDAPI": EGBDAPINode, 52 | "EG_TX_CCHQ": EGTXCCHQ, 53 | "EG_SCQY_QBQY": EGSCQYQBQYNode, 54 | "EG_TX_CJPJ": EGCJPJNode, 55 | "EG_JF_ZZSC": EGJFZZSC, 
56 | "EG_JXFZ_node": EGJXFZNODE, 57 | "EG_WXZ_QH": EGRYQHNode, 58 | "EG_XZ_QH": EGXZQHNode, 59 | "EG_CPSYTJ": EGCPSYTJNode, 60 | "EG_TSCDS_CJ": EGTSCDSCJLNode, 61 | "EG_TSCDS_DG": EGTSCDSDGLNode, 62 | "EG_TSCDS_FG": EGTSCDSFGLNode, 63 | "EG_TSCDS_JT": EGTSCDSJTLNode, 64 | "EG_TSCDS_QT": EGTSCDSQTLNode, 65 | "EG_TSCDS_RW": EGTSCDSRWLNode, 66 | "EG_SJ" : EGSJNode, 67 | "EG_TSCDS_WP": EGTSCDSWPLNode, 68 | "EG_TSCDS_ZL": EGTSCDSZLLNode, 69 | "EG_TSCMB_GL": EGTSCMBGLNode, 70 | "EG_TX_LJBC": EGTXBCLJBCNode, 71 | "EG_TC_Node": EGWBRYPJ, 72 | "EG_SJPJ_Node" : EGWBSJPJ, 73 | "EG_SCQY_BHDQY": EGSCQYBHDQYYNode, 74 | "EG_YSQY_BLLD": EGYSQYBLLDNode, 75 | "EG_YSQY_BBLLD": EGYSQYBBLLDNode, 76 | "EG_SCQY_SXQY": EGSCQSXQYNode, 77 | "EG_ZZ_BSYH": EGZZBSYH, 78 | "ER_TX_ZZCJ": EGTXZZCJNode, 79 | "EG_ZZ_HSYH": EGZZHSYH, 80 | "EG_ZZKZ_HT_node": EGZZKZHTNODE, 81 | "EG_ZZ_SSKZ": EGZZSSKZNODE, 82 | "EG_ZZ_BYYH": EGZZBYYHNode, 83 | "EG-YSZT-ZT" : EGYSZTNode, 84 | "EG_WB_KSH": EGWBKSH, 85 | "EG_TX_JZRY" : EGJZRYTX, 86 | "EG_TX_LJ" : EGTXLJNode, 87 | "EG_HT_YSTZ" : EGHTYSTZNode, 88 | "EG_TX_WHLJ" : EGWHLJ, 89 | "EG_ZZHBCJ" : EGZZHBCJNode, 90 | "EG_RY_HT" : EGRYHT, 91 | "EG_SZ_JDYS" : EGSZJDYS, 92 | "EG_SS_RYZH" : EG_SS_RYZH, 93 | "EG_ZY_WBK" : EGZYWBKNode, 94 | "EG_TXZZ_ZH" : EGTXZZZHNode, 95 | "ER_JBCH": EGJBCH, 96 | "EG_TX_SFBLS" : EGTXSFBLSNode, 97 | "EG_K_LATENT" : EGKLATENT, 98 | "EG_ZZ_MHHT" : EGZZMHHT, 99 | } 100 | 101 | NODE_DISPLAY_NAME_MAPPINGS = { 102 | "EG_FX_BDAPI" : "2🐕Baidu Translation API", 103 | "EG_TX_CCHQ" : "2🐕Image size acquisition", 104 | "EG_SCQY_QBQY" : "2🐕Regular color migration", 105 | "EG_TX_CJPJ" : "2🐕Image cropping data stitching", 106 | "EG_JF_ZZSC" : "2🐕Seam Mask Generator", 107 | "EG_JXFZ_node" : "2🐕Image Mirror Flip", 108 | "EG_WXZ_QH" : "2🐕Unrestricted switching", 109 | "EG_XZ_QH" : "2🐕Choice Switch", 110 | "EG_CPSYTJ" : "2🐕Add finished watermark image", 111 | "EG_TSCDS_CJ" : "2🐕Scene class", 112 | "EG_TSCDS_DG" : "2🐕Lighting Class", 113 | "EG_TSCDS_FG" : 
"2🐕Style category", 114 | "EG_TSCDS_JT" : "2🐕Lens class", 115 | "EG_TSCDS_QT" : "2🐕Other categories", 116 | "EG_TSCDS_RW" : "2🐕Character category", 117 | "EG_SJ" : "2🐕Random prompt", 118 | "EG_TSCDS_WP" : "2🐕Item category", 119 | "EG_TSCDS_ZL" : "2🐕Quality category", 120 | "EG_TSCMB_GL" : "2🐕Custom template", 121 | "EG_TX_LJBC" : "2🐕Specify image save path", 122 | "EG_TC_Node" : "2🐕Text arbitrary splicing", 123 | "EG_SJPJ_Node" : "2🐕Text random splicing", 124 | "EG_SCQY_BHDQY" : "2🐕Saturation migration", 125 | "EG_YSQY_BLLD" : "2🐕Preserve brightness", 126 | "EG_YSQY_BBLLD" : "2🐕Do not retain brightness", 127 | "EG_SCQY_SXQY" : "2🐕Hue migration", 128 | "EG_ZZ_BSYH" : "2🐕Mask Blurred white edges", 129 | "ER_TX_ZZCJ" : "2🐕Cropping image mask areas", 130 | "EG_ZZ_HSYH" : "2🐕Mask Blurred Black edges", 131 | "EG_ZZKZ_HT_node" : "2🐕Mask slider extension", 132 | "EG_ZZ_SSKZ" : "2🐕Mask Expansion", 133 | "EG_ZZ_BYYH" : "2🐕Mask edges blurred", 134 | "EG-YSZT-ZT" : "2🐕Text watermark addition", 135 | "EG_WB_KSH": "2🐕View Text", 136 | "EG_TX_JZRY" : "2🐕Load any image", 137 | "EG_TX_LJ" : "2🐕Conventional filters", 138 | "EG_HT_YSTZ" : "2🐕Color adjustment", 139 | "EG_TX_WHLJ" : "2🐕Internet celebrity filter", 140 | "EG_ZZHBCJ" : "2🐕Mask can be cut arbitrarily", 141 | "EG_RY_HT" : "2🐕Simple slider", 142 | "EG_SZ_JDYS" : "2🐕+-x÷", 143 | "EG_SS_RYZH" : "2🐕Int Float Text Swap", 144 | "EG_ZY_WBK" : "2🐕Free input box", 145 | "EG_TXZZ_ZH" : "2🐕Mask image exchange", 146 | "ER_JBCH": "2🐕Redraw encoder", 147 | "EG_TX_SFBLS" : "2🐕Image scaling lock", 148 | "EG_K_LATENT" : "2🐕Proportional empty Latent", 149 | "EG_ZZ_MHHT" : "2🐕Fuzzy fast intensity", 150 | } 151 | -------------------------------------------------------------------------------- /docs/1细节优化.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/1细节优化.png 
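For reference, ComfyUI loads this pack by importing the `__init__.py` shown above and reading its module-level `NODE_CLASS_MAPPINGS` and `NODE_DISPLAY_NAME_MAPPINGS` dicts. A minimal, self-contained sketch of that registration pattern follows; the `ExampleNode` class is hypothetical and not one of this pack's nodes, only the two dict names and the class attributes (`INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, `CATEGORY`) are the real convention:

```python
# Minimal sketch of the ComfyUI node registration pattern used by this pack.
# "ExampleNode" is hypothetical; the dict names and class attributes are the
# convention ComfyUI actually reads when importing a custom_nodes package.
class ExampleNode:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares one required string input with an empty default.
        return {"required": {"text": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)  # one output socket of type STRING
    FUNCTION = "run"            # name of the method ComfyUI calls
    CATEGORY = "2🐕/example"    # menu placement, mirroring this pack's style

    def run(self, text):
        # Node outputs are always returned as a tuple.
        return (text.strip(),)

# ComfyUI imports the package and reads these two dicts to register the nodes.
NODE_CLASS_MAPPINGS = {"EG_Example": ExampleNode}
NODE_DISPLAY_NAME_MAPPINGS = {"EG_Example": "2🐕Example"}
```

The long mapping tables above follow exactly this shape, just with one entry per node in the pack.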
-------------------------------------------------------------------------------- /docs/1细节优化2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/1细节优化2.png -------------------------------------------------------------------------------- /docs/2固定提示词大师.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/2固定提示词大师.png -------------------------------------------------------------------------------- /docs/2提示词模板管理.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/2提示词模板管理.png -------------------------------------------------------------------------------- /docs/2文本拼接.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/2文本拼接.png -------------------------------------------------------------------------------- /docs/2自定义随机提示词大师.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/2自定义随机提示词大师.png -------------------------------------------------------------------------------- /docs/3水印大师.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/3水印大师.png -------------------------------------------------------------------------------- /docs/4图像尺寸获取.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/4图像尺寸获取.png -------------------------------------------------------------------------------- /docs/4图像指定保存路径.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/4图像指定保存路径.png -------------------------------------------------------------------------------- /docs/4图像镜像翻转.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/4图像镜像翻转.png -------------------------------------------------------------------------------- /docs/4滤镜.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/4滤镜.png -------------------------------------------------------------------------------- /docs/4颜色迁移.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/4颜色迁移.png -------------------------------------------------------------------------------- /docs/5遮罩处理.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/5遮罩处理.png -------------------------------------------------------------------------------- /docs/5遮罩处理2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/5遮罩处理2.png -------------------------------------------------------------------------------- /docs/5遮罩羽化.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/5遮罩羽化.png -------------------------------------------------------------------------------- /docs/6百度翻译API.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/6百度翻译API.png -------------------------------------------------------------------------------- /docs/LOGO1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/LOGO1.png -------------------------------------------------------------------------------- /docs/LOGO2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/LOGO2.png -------------------------------------------------------------------------------- /docs/workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/workflow.png -------------------------------------------------------------------------------- /docs/修复前原图.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/修复前原图.png 
-------------------------------------------------------------------------------- /docs/修复后.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/修复后.png -------------------------------------------------------------------------------- /docs/修复后2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/修复后2.png -------------------------------------------------------------------------------- /docs/内补.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/内补.png -------------------------------------------------------------------------------- /docs/内补前.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/内补前.png -------------------------------------------------------------------------------- /docs/内补后.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/内补后.png -------------------------------------------------------------------------------- /docs/加载任意图像.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/加载任意图像.png -------------------------------------------------------------------------------- /docs/加载任意图像2.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/加载任意图像2.png -------------------------------------------------------------------------------- /docs/局部修复前.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/局部修复前.png -------------------------------------------------------------------------------- /docs/局部修复后.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/docs/局部修复后.png -------------------------------------------------------------------------------- /fonts/优设标题黑.ttf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/11dogzi/Comfyui-ergouzi-Nodes/0d6ac29773fa03e439dd9deb282453b739403427/fonts/优设标题黑.ttf -------------------------------------------------------------------------------- /js/wbksh.js: -------------------------------------------------------------------------------- 1 | 2 | import { app } from "/scripts/app.js"; 3 | import { ComfyWidgets } from "/scripts/widgets.js"; 4 | 5 | app.registerExtension({ 6 | name: "Comfy.wbksh", 7 | async beforeRegisterNodeDef(nodeType, nodeData, app) { 8 | // --- EG_WB_KSH Node 9 | if (nodeData.name === "EG_WB_KSH") { 10 | // Node Created 11 | const onNodeCreated = nodeType.prototype.onNodeCreated; 12 | nodeType.prototype.onNodeCreated = function () { 13 | const ret = onNodeCreated 14 | ? 
onNodeCreated.apply(this, arguments) 15 | : undefined; 16 | 17 | let EG_WB_KSH = app.graph._nodes.filter( 18 | (wi) => wi.type == nodeData.name 19 | ), 20 | nodeName = `${nodeData.name}_${EG_WB_KSH.length}`; 21 | 22 | console.log(`Create ${nodeData.name}: ${nodeName}`); 23 | 24 | const wi = ComfyWidgets.STRING( 25 | this, 26 | nodeName, 27 | [ 28 | "STRING", 29 | { 30 | default: "", 31 | placeholder: "Text output...", 32 | multiline: true, 33 | }, 34 | ], 35 | app 36 | ); 37 | wi.widget.inputEl.readOnly = true; 38 | return ret; 39 | }; 40 | // Function set value 41 | const outSet = function (texts) { 42 | if (texts.length > 0) { 43 | let widget_id = this?.widgets.findIndex( 44 | (w) => w.type == "customtext" 45 | ); 46 | 47 | if (Array.isArray(texts)) 48 | texts = texts 49 | .filter((word) => word.trim() !== "") 50 | .map((word) => word.trim()) 51 | .join(" "); 52 | 53 | this.widgets[widget_id].value = texts; 54 | app.graph.setDirtyCanvas(true); 55 | } 56 | }; 57 | 58 | // onExecuted 59 | const onExecuted = nodeType.prototype.onExecuted; 60 | nodeType.prototype.onExecuted = function (texts) { 61 | onExecuted?.apply(this, arguments); 62 | outSet.call(this, texts?.string); 63 | }; 64 | // onConfigure 65 | const onConfigure = nodeType.prototype.onConfigure; 66 | nodeType.prototype.onConfigure = function (w) { 67 | onConfigure?.apply(this, arguments); 68 | if (w?.widgets_values?.length) { 69 | outSet.call(this, w.widgets_values); 70 | } 71 | }; 72 | } 73 | // --- EG_WB_KSH Node 74 | }, 75 | }); 76 | -------------------------------------------------------------------------------- /json/egtscglds.json: -------------------------------------------------------------------------------- 1 | { 2 | "XLNegative": "(worst quality,low resolution,bad hands,open mouth),distorted,twisted,watermark," 3 | } -------------------------------------------------------------------------------- /nodes/EGJBCHBMQ.py: 
-------------------------------------------------------------------------------- 1 | import torch 2 | import math 3 | class EGJBCH: 4 | @classmethod 5 | def INPUT_TYPES(s): 6 | return {"required": { 7 | "pixels": ("IMAGE",), 8 | "vae": ("VAE",), 9 | "mask": ("MASK",), 10 | "grow_mask_by": ("INT", {"default": 6, "min": 0, "max": 64, "step": 1}), 11 | "use_original_image": (["original", "filling"],), 12 | }} 13 | 14 | RETURN_TYPES = ("LATENT",) 15 | FUNCTION = "encode" 16 | CATEGORY = "2🐕/🤿Latent" 17 | 18 | def encode(self, vae, pixels, mask, grow_mask_by=6, use_original_image="filling"): 19 | x = (pixels.shape[1] // 8) * 8 20 | y = (pixels.shape[2] // 8) * 8 21 | mask = torch.nn.functional.interpolate(mask.reshape((-1, 1, mask.shape[-2], mask.shape[-1])), 22 | size=(pixels.shape[1], pixels.shape[2]), mode="bilinear") 23 | if use_original_image == "filling": 24 | pixels = pixels.clone() 25 | if pixels.shape[1] != x or pixels.shape[2] != y: 26 | x_offset = (pixels.shape[1] % 8) // 2 27 | y_offset = (pixels.shape[2] % 8) // 2 28 | pixels = pixels[:, x_offset:x + x_offset, y_offset:y + y_offset, :] 29 | mask = mask[:, :, x_offset:x + x_offset, y_offset:y + y_offset] 30 | # grow mask by a few pixels to keep things seamless in latent space 31 | if grow_mask_by == 0: 32 | mask_erosion = mask 33 | else: 34 | kernel_tensor = torch.ones((1, 1, grow_mask_by, grow_mask_by)) 35 | padding = math.ceil((grow_mask_by - 1) / 2) 36 | mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask.round(), kernel_tensor, padding=padding), 0, 1) 37 | 38 | m = (1.0 - mask.round()).squeeze(1) 39 | if use_original_image == "filling": 40 | for i in range(3): 41 | pixels[:, :, :, i] -= 0.5 42 | pixels[:, :, :, i] *= m 43 | pixels[:, :, :, i] += 0.5 44 | t = vae.encode(pixels) 45 | return ({"samples": t, "noise_mask": (mask_erosion[:, :, :x, :y].round())},) 46 | -------------------------------------------------------------------------------- /nodes/EGJDFDHT.py: 
-------------------------------------------------------------------------------- 1 | class EGRYHT: 2 | @classmethod 3 | def INPUT_TYPES(s): 4 | return {"required": { 5 | "weight": ("FLOAT", { 6 | "default": 1, 7 | "min": 0, 8 | "max": 1, 9 | "step": 0.01, 10 | "display": "slider" 11 | }), 12 | }, 13 | "optional": {} 14 | } 15 | RETURN_TYPES = ("FLOAT",) 16 | FUNCTION = "run" 17 | CATEGORY = "2🐕/🔢number" 18 | INPUT_IS_LIST = False 19 | OUTPUT_IS_LIST = (False,) 20 | def run(self, weight): 21 | scaled_number = weight 22 | return (scaled_number,) 23 | 24 | -------------------------------------------------------------------------------- /nodes/EGLATENTBISC.py: -------------------------------------------------------------------------------- 1 | import torch 2 | class Args: 3 | def __init__(self): 4 | self.gpu_only = False 5 | args = Args() 6 | def intermediate_device(): 7 | if args.gpu_only: 8 | return get_torch_device() 9 | else: 10 | return torch.device("cpu") 11 | class EGKLATENT: 12 | ratios = { 13 | "1:1": (1, 1), 14 | "3:2": (3, 2), 15 | "16:9": (16, 9), 16 | "2:3": (2, 3), 17 | "9:16": (9, 16) 18 | } 19 | 20 | def __init__(self): 21 | self.device = intermediate_device() 22 | 23 | @classmethod 24 | def INPUT_TYPES(s): 25 | return {"required": {"width": ("INT", {"default": 512, "min": 16, "max": 4096, "step": 8}), 26 | "height": ("INT", {"default": 512, "min": 16, "max": 4096, "step": 8}), 27 | "ratio": (list(s.ratios.keys()), {"default": "1:1"}), 28 | "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}), 29 | "equal_scale": ("BOOLEAN", {"default": False}) 30 | }} 31 | 32 | RETURN_TYPES = ("LATENT", "INT", "INT") 33 | RETURN_NAMES = ('LATENT', 'width', 'height') 34 | FUNCTION = "generate" 35 | CATEGORY = "2🐕/🤿Latent" 36 | 37 | def generate(self, width, height, batch_size=1, ratio="1:1", equal_scale=False): 38 | if ratio not in self.ratios.keys(): 39 | raise ValueError(f"Invalid ratio value: {ratio}. 
Valid ratios are: {', '.join(self.ratios.keys())}") 40 | if not equal_scale: 41 | latent = torch.zeros([batch_size, 4, int(height // 8), int(width // 8)], device=self.device) 42 | return ({"samples": latent}, width, height) 43 | if width == height: 44 | max_dim = width 45 | ratio_width, ratio_height = self.ratios[ratio] 46 | if ratio_width >= ratio_height: 47 | width = max_dim 48 | height = int(max_dim * ratio_height / ratio_width) 49 | else: 50 | height = max_dim 51 | width = int(max_dim * ratio_width / ratio_height) 52 | else: 53 | max_dim = max(width, height) 54 | ratio_width, ratio_height = self.ratios[ratio] 55 | if width == max_dim: 56 | new_height = int(max_dim * ratio_height / ratio_width) 57 | height = new_height 58 | elif height == max_dim: 59 | new_width = int(max_dim * ratio_width / ratio_height) 60 | width = new_width 61 | 62 | latent = torch.zeros([batch_size, 4, int(height // 8), int(width // 8)], device=self.device) 63 | return ({"samples": latent}, width, height) 64 | -------------------------------------------------------------------------------- /nodes/EGSZHZ.py: -------------------------------------------------------------------------------- 1 | NAMESPACE='2🐕Int Float Text Swap' 2 | def is_context_empty(ctx): 3 | return not ctx or all(v is None for v in ctx.values()) 4 | def get_category(sub_dirs=None): 5 | if sub_dirs is None: 6 | return NAMESPACE 7 | else: 8 | return "{}/utils".format(NAMESPACE) 9 | def get_name(name): 10 | return '{} ({})'.format(name, NAMESPACE) 11 | class AnyType(str): 12 | def __ne__(self, __value: object) -> bool: 13 | return False 14 | any_type = AnyType("*") 15 | def is_none(value): 16 | if value is not None: 17 | if isinstance(value, dict) and 'model' in value and 'clip' in value: 18 | return is_context_empty(value) 19 | return value is None 20 | def convert_to_int(value): 21 | try: 22 | return int(value) 23 | except ValueError: 24 | return None 25 | def convert_to_float(value): 26 | try: 27 | return float(value) 28 | 
except ValueError: 29 | return None 30 | def convert_to_str(value): 31 | return str(value) 32 | class EG_SS_RYZH: 33 | NAME = get_name("Any Switch") 34 | CATEGORY = get_category() 35 | @classmethod 36 | def INPUT_TYPES(cls): 37 | return { 38 | "required": {"Any_input": (any_type,)}, 39 | "optional": {}, 40 | } 41 | RETURN_TYPES = (any_type, any_type, any_type) 42 | RETURN_NAMES = ('Int', 'Float', 'Text') 43 | FUNCTION = "switch" 44 | CATEGORY = "2🐕/🔢number" 45 | def switch(self, Any_input=None): 46 | if Any_input is None: 47 | return (None, None, None) 48 | 49 | int_output = convert_to_int(Any_input) 50 | float_output = convert_to_float(Any_input) 51 | str_output = convert_to_str(Any_input) 52 | 53 | return (int_output, float_output, str_output) 54 | 55 | NODE_CLASS_MAPPINGS = { "EG_SS_RYZH" : EG_SS_RYZH } 56 | NODE_DISPLAY_NAME_MAPPINGS = { "EG_SS_RYZH" : "2🐕Int Float Text Swap" } 57 | 58 | -------------------------------------------------------------------------------- /nodes/EGSZJDYS.py: -------------------------------------------------------------------------------- 1 | import torch 2 | class EGSZJDYS: 3 | def __init__(self): 4 | pass 5 | @classmethod 6 | def INPUT_TYPES(cls): 7 | return { 8 | "required": { 9 | "number1": ("STRING", { 10 | "multiline": True, 11 | "default": "" 12 | }), 13 | "operation": (["+", "-", "x", "÷"], {}), 14 | "number2": ("STRING", { 15 | "multiline": True, 16 | "default": "" 17 | }), 18 | } 19 | } 20 | RETURN_TYPES = ("INT", "FLOAT", "STRING") 21 | RETURN_NAMES = ("result_int", "result_float", "result_str") 22 | FUNCTION = "compute" 23 | CATEGORY = "2🐕/🔢number" 24 | 25 | def compute(self, number1, number2, operation): 26 | try: 27 | number1 = float(number1) 28 | number2 = float(number2) 29 | except ValueError: 30 | return (None, None, "Invalid input. 
Please enter a number.") 31 | 32 | if operation == "+": 33 | result = number1 + number2 34 | elif operation == "-": 35 | result = number1 - number2 36 | elif operation == "x": 37 | result = number1 * number2 38 | elif operation == "÷": 39 | if number2 == 0: 40 | return (None, None, "Cannot divide by zero.") 41 | result = number1 / number2 42 | else: 43 | return (None, None, "Invalid operation.") 44 | 45 | if result.is_integer(): 46 | result_str = str(int(result)) 47 | else: 48 | result_str = str(result) 49 | 50 | return (int(result), float(result), result_str) -------------------------------------------------------------------------------- /nodes/EGTXSFBLS.py: -------------------------------------------------------------------------------- 1 | import torch 2 | def common_upscale(samples, width, height, upscale_method, crop): 3 | if crop == "center": 4 | old_width = samples.shape[3] 5 | old_height = samples.shape[2] 6 | old_aspect = old_width / old_height 7 | new_aspect = width / height 8 | x = 0 9 | y = 0 10 | if old_aspect > new_aspect: 11 | x = round((old_width - old_width * (new_aspect / old_aspect)) / 2) 12 | elif old_aspect < new_aspect: 13 | y = round((old_height - old_height * (old_aspect / new_aspect)) / 2) 14 | s = samples[:,:,y:old_height-y,x:old_width-x] 15 | else: 16 | s = samples 17 | 18 | if upscale_method in ("bislerp", "lanczos"): 19 | # These helpers live in ComfyUI's comfy.utils; they were previously 20 | # called here without being defined or imported. 21 | import comfy.utils 22 | fn = comfy.utils.bislerp if upscale_method == "bislerp" else comfy.utils.lanczos 23 | return fn(s, width, height) 24 | return torch.nn.functional.interpolate(s, size=(height, width), mode=upscale_method) 25 | class EGTXSFBLSNode: 26 | upscale_methods = ["nearest-exact", "bilinear", "area", "bicubic", "lanczos"] 27 | crop_methods = ["disabled", "center"] 28 | @classmethod 29 | def INPUT_TYPES(s): 30 | return {"required": {"image": ("IMAGE",), 31 | "width": ("INT", {"default": 512, "min": 0, "max": 10000, "step": 1}), 32 | "height": ("INT", {"default": 512, "min": 0, "max": 10000, "step": 1}), 33 | "crop":
(s.crop_methods,)}, 34 | "optional": {"upscale_method": (s.upscale_methods,), 35 | "lock_aspect_ratio": ("BOOLEAN", {"default": False}), 36 | } 37 | } 38 | RETURN_TYPES = ("IMAGE", "INT", "INT") 39 | RETURN_NAMES =('image', 'width', 'height') 40 | FUNCTION = "upscale" 41 | CATEGORY = "2🐕/🖼️Image" 42 | def upscale(self, image, upscale_method, width, height, crop, lock_aspect_ratio=False): 43 | if width == 0 and height == 0: 44 | s = image 45 | return_width = image.shape[3] 46 | return_height = image.shape[2] 47 | else: 48 | samples = image.movedim(-1,1) 49 | original_width, original_height = samples.shape[3], samples.shape[2] 50 | original_aspect = original_width / original_height 51 | if not lock_aspect_ratio: 52 | if width == 0: 53 | width = original_width 54 | if height == 0: 55 | height = original_height 56 | else: 57 | if width != 0 and height != 0: 58 | if width > height: 59 | height = max(1, round(width / original_aspect)) 60 | else: 61 | width = max(1, round(height * original_aspect)) 62 | elif width != 0 and height == 0: 63 | height = max(1, round(width / original_aspect)) 64 | elif width == 0 and height != 0: 65 | width = max(1, round(height * original_aspect)) 66 | s = common_upscale(samples, width, height, upscale_method, crop) 67 | s = s.movedim(1,-1) 68 | return_width = width 69 | return_height = height 70 | return (s, return_width, return_height) 71 | -------------------------------------------------------------------------------- /nodes/EGWBZYSRK.py: -------------------------------------------------------------------------------- 1 | import torch 2 | class EGZYWBKNode: 3 | def __init__(self): 4 | pass 5 | @classmethod 6 | def INPUT_TYPES(cls): 7 | return { 8 | "required": { 9 | "number1": ("STRING", { 10 | "multiline": True, 11 | "default": "" 12 | }), 13 | } 14 | } 15 | RETURN_TYPES = ("INT", "FLOAT", "STRING") 16 | RETURN_NAMES = ("result_int", "result_float", "result_str") 17 | FUNCTION = "convert_number_types" 18 | CATEGORY = "2🐕/🗒️Text" 19 | def 
convert_number_types(self, number1): 20 | try: 21 | float_num = float(number1) 22 | int_num = int(float_num) 23 | str_num = number1 24 | except ValueError: 25 | return (None, None, number1) 26 | return (int_num, float_num, str_num) 27 | NODE_CLASS_MAPPINGS = { "EG_ZY_WBK" : EGZYWBKNode } 28 | NODE_DISPLAY_NAME_MAPPINGS = { "EG_ZY_WBK" : "2🐕Free input box" } -------------------------------------------------------------------------------- /nodes/EGZZJDYHHT.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageFilter 2 | import torch 3 | import numpy as np 4 | 5 | def tensortopil(image): 6 | return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 7 | def piltotensor(image): 8 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 9 | 10 | class EGZZMHHT: 11 | @classmethod 12 | def INPUT_TYPES(s): 13 | return { 14 | "required": { 15 | "mask": ("MASK",), 16 | "Fuzzyintensity":("INT", {"default": 1, 17 | "min":0, 18 | "max": 150, 19 | "step": 1, 20 | "display": "slider"}) 21 | } 22 | } 23 | 24 | RETURN_TYPES = ('MASK',) 25 | FUNCTION = "maskmohu" 26 | CATEGORY = "2🐕/Mask/Fuzzy fast intensity" 27 | INPUT_IS_LIST = False 28 | OUTPUT_IS_LIST = (False,) 29 | def maskmohu(self,mask,Fuzzyintensity): 30 | print('SmoothMask',mask.shape) 31 | mask=tensortopil(mask) 32 | feathered_image = mask.filter(ImageFilter.GaussianBlur(Fuzzyintensity)) 33 | 34 | mask=piltotensor(feathered_image) 35 | 36 | return (mask,) 37 | -------------------------------------------------------------------------------- /nodes/EGZZTXHZ.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from PIL import Image 3 | import numpy as np 4 | def tensor2pil(image): 5 | return Image.fromarray(np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 6 | def pil2tensor(image): 7 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 8 | def image2mask(image_pil): 9 | # Convert image to grayscale 10 | image_pil = image_pil.convert("L") 11 | # Convert grayscale image to binary mask 12 | threshold = 128 13 | mask_array = np.array(image_pil) > threshold 14 | return Image.fromarray((mask_array * 255).astype(np.uint8)) 15 | def mask2image(mask_pil): 16 | # Binarize first so intermediate gray values cannot raise a KeyError, 17 | # then expand to RGB; replaces the slow per-pixel color_map loop. 18 | threshold = 128 19 | mask_array = np.array(mask_pil.convert("L")) > threshold 20 | binary_pil = Image.fromarray((mask_array * 255).astype(np.uint8)) 21 | return binary_pil.convert("RGB") 22 | 23 | class EGTXZZZHNode: 24 | def __init__(self): 25 | pass 26 | @classmethod 27 | def INPUT_TYPES(cls): 28 | return { 29 | "required": {}, 30 | "optional": { 31 | "mask_input": ("MASK", {}), 32 | "image_input": ("IMAGE", {}), 33 | }, 34 | } 35 | RETURN_TYPES = ("IMAGE", "MASK") 36 | RETURN_NAMES = ("output_image", "output_mask") 37 | FUNCTION = "convert_input" 38 | CATEGORY = "2🐕/⛱️Mask" 39 | def convert_input(self, image_input=None, mask_input=None): 40 | if image_input is None and mask_input is None: 41 | default_image = Image.new('L', (256, 256), color=255) 42 | default_mask = Image.new('L', (256, 256), color=0) 43 | image_tensor = pil2tensor(default_image) 44 | mask_tensor = pil2tensor(default_mask) 45 | return [image_tensor, mask_tensor] 46 | 47 | elif image_input is not None: 48 | input_image_pil = tensor2pil(image_input) 49 | output_mask_pil = image2mask(input_image_pil) 50 | output_image_pil = input_image_pil 51 | elif mask_input is not None: 52 | input_mask_pil = tensor2pil(mask_input) 53 | output_image_pil = mask2image(input_mask_pil) 54 | output_mask_pil = input_mask_pil 55 | 56 | output_image_tensor = pil2tensor(output_image_pil) 57 | output_mask_tensor =
pil2tensor(output_mask_pil) 58 | 59 | return [output_image_tensor, output_mask_tensor] 60 | 61 | NODE_CLASS_MAPPINGS = { "EG_TXZZ_ZH" : EGTXZZZHNode } 62 | NODE_DISPLAY_NAME_MAPPINGS = { "EG_TXZZ_ZH" : "2🐕Mask image exchange" } 63 | -------------------------------------------------------------------------------- /nodes/egbdfy.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import hashlib 3 | import json 4 | 5 | NAMESPACE = '2🐕Baidu Translation API' 6 | APPID_API_KEY_FILE = 'baidukey.json' 7 | 8 | def get_category(sub_dirs=None): 9 | if sub_dirs is None: 10 | return NAMESPACE 11 | else: 12 | return "{}/{}".format(NAMESPACE, sub_dirs) 13 | def get_name(name): 14 | return '{} ({})'.format(name, NAMESPACE) 15 | class EGBDAPINode: 16 | NAME = get_name("Translation") 17 | CATEGORY = get_category() 18 | @classmethod 19 | def INPUT_TYPES(cls): 20 | return { 21 | "required": { 22 | "text": ("STRING", { 23 | "multiline": True, 24 | "default": "Free Baidu Translation API application website”https://fanyi-api.baidu.com/?ext_channel=Aldtype&fr=pcHeader“,Only the first time is required to input ID and KEY,More SD tutorials available on Bilibili @ 灵仙儿和二狗子🐕" 25 | }), 26 | }, 27 | "optional": { 28 | "appid": ("STRING", {}), 29 | "api_key": ("STRING", {}), 30 | "Translation_mode": (["zh-en", "en-zh"],) 31 | }, 32 | } 33 | RETURN_TYPES = ("STRING",) 34 | RETURN_NAMES = ('TEXT',) 35 | FUNCTION = "translate" 36 | CATEGORY = "2🐕/🗒️Text" 37 | def __init__(self, appid=None, api_key=None): 38 | self.appid = appid 39 | self.api_key = api_key 40 | self.load_credentials() 41 | def load_credentials(self): 42 | try: 43 | with open(APPID_API_KEY_FILE, 'r') as f: 44 | credentials = json.load(f) 45 | self.appid = credentials.get('appid', self.appid) 46 | self.api_key = credentials.get('api_key', self.api_key) 47 | except FileNotFoundError: 48 | pass 49 | except json.JSONDecodeError: 50 | print("Error decoding JSON credentials. 
Using default values.") 51 | def save_credentials(self): 52 | with open(APPID_API_KEY_FILE, 'w') as f: 53 | json.dump({'appid': self.appid, 'api_key': self.api_key}, f) 54 | def translate(self, text, appid=None, api_key=None, Translation_mode="zh-en"): 55 | 56 | if appid: 57 | self.appid = appid 58 | self.save_credentials() 59 | if api_key: 60 | self.api_key = api_key 61 | self.save_credentials() 62 | 63 | if not self.appid or not self.api_key: 64 | return ("Translation failed - missing appid or api_key",) 65 | 66 | url = "https://fanyi-api.baidu.com/api/trans/vip/translate" 67 | 68 | salt = '123456' 69 | sign = self.calculate_sign(text, salt, self.appid, self.api_key) 70 | params = { 71 | 'q': text, 72 | 'appid': self.appid, 73 | 'salt': salt, 74 | 'sign': sign 75 | } 76 | 77 | params['from'], params['to'] = Translation_mode.split('-') 78 | 79 | response = requests.get(url, params=params) 80 | 81 | print(response.text) 82 | 83 | if response.status_code == 200: 84 | try: 85 | response_json = response.json() 86 | 87 | if "error_code" in response_json: 88 | return ("Translation failed - error code: {}".format(response_json["error_code"]),) 89 | 90 | translation = response_json["trans_result"][0]["dst"] 91 | return (translation,) 92 | except KeyError as e: 93 | return ("Translation failed - KeyError: {}".format(e),) 94 | else: 95 | return ("Translation failed - status code: {}".format(response.status_code),) 96 | def calculate_sign(self, query, salt, appid, api_key): 97 | sign = f"{appid}{query}{salt}{api_key}" 98 | sign = sign.encode('utf-8') 99 | sign = hashlib.md5(sign).hexdigest() 100 | return sign 101 | 102 | -------------------------------------------------------------------------------- /nodes/egcchq.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from typing import Tuple 3 | 4 | class EGTXCCHQ: 5 | def __init__(self): 6 | pass 7 | 8 | @classmethod 9 | def INPUT_TYPES(cls): 10 | return { 11 | "required": 
{ 12 | "image_in": ("IMAGE", {}), 13 | } 14 | } 15 | 16 | RETURN_TYPES = ("INT", "INT") 17 | RETURN_NAMES = ("width", "height") 18 | FUNCTION = "get_image_size" 19 | CATEGORY = "2🐕/🖼️Image" 20 | 21 | def get_image_size(self, image_in: torch.Tensor) -> Tuple[int, int]: 22 | if len(image_in.shape) == 4: 23 | height, width = image_in.shape[1], image_in.shape[2] 24 | else: 25 | height, width = image_in.shape[-2], image_in.shape[-1] 26 | return (width, height) 27 | 28 | -------------------------------------------------------------------------------- /nodes/egcgysqy.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | from skimage import exposure 7 | from skimage.transform import resize 8 | 9 | 10 | def tensor_to_pil(img_tensor, batch_index=0): 11 | 12 | img_tensor = img_tensor[batch_index].unsqueeze(0) 13 | i = 255. 
* img_tensor.cpu().numpy() 14 | img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8).squeeze()) 15 | return img 16 | 17 | 18 | 19 | def pil_to_tensor(image): 20 | 21 | image = np.array(image).astype(np.float32) / 255.0 22 | image = torch.from_numpy(image).unsqueeze(0) 23 | if len(image.shape) == 3: 24 | image = image.unsqueeze(-1) 25 | return image 26 | 27 | class EGSCQYQBQYNode: 28 | def __init__(self): 29 | pass 30 | 31 | @classmethod 32 | def INPUT_TYPES(cls): 33 | return { 34 | "required": { 35 | "source_image": ("IMAGE",), 36 | "target_image": ("IMAGE",), 37 | }, 38 | "optional": { 39 | "strength": ("FLOAT", { 40 | "default": 50, 41 | "min": 0, 42 | "max": 100, 43 | "step": 1, 44 | "precision": 100, 45 | "display": "slider" 46 | }), 47 | } 48 | } 49 | 50 | RETURN_TYPES = ("IMAGE",) 51 | RETURN_NAMES = ("result_image",) 52 | FUNCTION = "transfer_color" 53 | CATEGORY = "2🐕/🖼️Image/🎨Color processing" 54 | 55 | def transfer_color(self, source_image, target_image, strength=50): 56 | source_pil = tensor_to_pil(source_image) 57 | target_pil = tensor_to_pil(target_image) 58 | 59 | source_np = np.array(source_pil) 60 | target_np = np.array(target_pil) 61 | 62 | matched_target_np = np.empty_like(target_np) 63 | for i in range(source_np.shape[-1]): 64 | matched_target_np[:, :, i] = exposure.match_histograms( 65 | target_np[:, :, i], source_np[:, :, i] 66 | ) 67 | 68 | result_np = (1 - strength / 100) * target_np + (strength / 100) * matched_target_np 69 | result_pil = Image.fromarray(result_np.astype(np.uint8)) 70 | 71 | result_tensor = pil_to_tensor(result_pil) 72 | return (result_tensor,) 73 | -------------------------------------------------------------------------------- /nodes/egcjpjnode.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | 7 | def tensor2pil(image): 8 | return 
Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 9 | 10 | def pil2tensor(image): 11 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 12 | 13 | class EGCJPJNode: 14 | def __init__(self): 15 | pass 16 | 17 | @classmethod 18 | def INPUT_TYPES(cls): 19 | return { 20 | "required": { 21 | "original_image": ("IMAGE",), 22 | "cropped_image": ("IMAGE",), 23 | "Crop_data": ("COORDS",), 24 | }, 25 | } 26 | 27 | RETURN_TYPES = ("IMAGE",) 28 | RETURN_NAMES = ("image",) 29 | FUNCTION = "resize_and_paste" 30 | CATEGORY = "2🐕/🔍Refinement processing" 31 | 32 | def resize_and_paste(self, original_image, cropped_image, Crop_data): 33 | original_image_pil = tensor2pil(original_image) 34 | cropped_image_pil = tensor2pil(cropped_image) 35 | 36 | if Crop_data is None: 37 | return (original_image,) 38 | 39 | 40 | y0, y1, x0, x1 = Crop_data 41 | 42 | target_width = x1 - x0 43 | target_height = y1 - y0 44 | 45 | cropped_image_pil = cropped_image_pil.resize((target_width, target_height)) 46 | 47 | original_image_pil.paste(cropped_image_pil, (x0, y0)) 48 | 49 | pasted_image_tensor = pil2tensor(original_image_pil) 50 | 51 | return (pasted_image_tensor,) 52 | -------------------------------------------------------------------------------- /nodes/egjfzz.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import scipy.ndimage 3 | import torch 4 | 5 | def grow(mask, expand, tapered_corners): 6 | c = 0 if tapered_corners else 1 7 | kernel = np.array([[c, 1, c], 8 | [1, 1, 1], 9 | [c, 1, c]]) 10 | mask = mask.reshape((-1, mask.shape[-2], mask.shape[-1])) 11 | out = [] 12 | for m in mask: 13 | output = m.numpy() 14 | for _ in range(abs(expand)): 15 | if expand < 0: 16 | output = scipy.ndimage.grey_erosion(output, footprint=kernel) 17 | else: 18 | output = scipy.ndimage.grey_dilation(output, footprint=kernel) 19 | output = torch.from_numpy(output) 20 | out.append(output) 
21 | return torch.stack(out, dim=0) 22 | 23 | def combine(destination, source, x, y): 24 | output = destination.reshape((-1, destination.shape[-2], destination.shape[-1])).clone() 25 | source = source.reshape((-1, source.shape[-2], source.shape[-1])) 26 | 27 | left, top = (x, y,) 28 | right, bottom = (min(left + source.shape[-1], destination.shape[-1]), min(top + source.shape[-2], destination.shape[-2])) 29 | visible_width, visible_height = (right - left, bottom - top,) 30 | 31 | source_portion = source[:, :visible_height, :visible_width] 32 | destination_portion = destination[:, top:bottom, left:right] 33 | 34 | 35 | output[:, top:bottom, left:right] = destination_portion - source_portion 36 | 37 | output = torch.clamp(output, 0.0, 1.0) 38 | 39 | return output 40 | 41 | class EGJFZZSC: 42 | def __init__(self): 43 | pass 44 | 45 | @classmethod 46 | def INPUT_TYPES(cls): 47 | return { 48 | 49 | "required": { 50 | "mask": ("MASK",), 51 | "senerate_width": ("INT", { 52 | "default": 10, 53 | "min": 1, 54 | "max": 666, 55 | "step": 1 56 | }), 57 | "smooth": ("BOOLEAN", {"default": True}), 58 | }, 59 | 60 | "optional": {} 61 | } 62 | 63 | RETURN_TYPES = ("MASK",) 64 | RETURN_NAMES = ("mask",) 65 | FUNCTION = "run" 66 | CATEGORY = "2🐕/⛱️Mask" 67 | 68 | def run(self, mask, senerate_width, smooth): 69 | m1 = grow(mask, senerate_width, smooth) 70 | m2 = grow(mask, -senerate_width, smooth) 71 | m3 = combine(m1, m2, 0, 0) 72 | 73 | return (m3,) 74 | -------------------------------------------------------------------------------- /nodes/egjxfz.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | 7 | 8 | def tensor2pil(image): 9 | return Image.fromarray(np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 10 | 11 | 12 | def pil2tensor(image): 13 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 14 | 15 | 16 | class EGJXFZNODE: 17 | def __init__(self): 18 | pass 19 | 20 | @classmethod 21 | def INPUT_TYPES(cls): 22 | return { 23 | "required": { 24 | "image": ("IMAGE",), 25 | "direction": (["level", "vertical"],), 26 | }, 27 | } 28 | 29 | RETURN_TYPES = ("IMAGE",) 30 | RETURN_NAMES = ("image",) 31 | FUNCTION = "image_flip" 32 | CATEGORY = "2🐕/🖼️Image" 33 | 34 | def image_flip(self, image, direction): 35 | batch_tensor = [] 36 | for image in image: 37 | image = tensor2pil(image) 38 | if direction == 'level': 39 | image = image.transpose(Image.FLIP_LEFT_RIGHT) 40 | elif direction == 'vertical': 41 | image = image.transpose(Image.FLIP_TOP_BOTTOM) 42 | batch_tensor.append(pil2tensor(image)) 43 | batch_tensor = torch.cat(batch_tensor, dim=0) 44 | return (batch_tensor, ) 45 | 46 | 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /nodes/egryqh.py: -------------------------------------------------------------------------------- 1 | NAMESPACE='2🐕Unrestricted switching' 2 | 3 | def is_context_empty(ctx): 4 | return not ctx or all(v is None for v in ctx.values()) 5 | 6 | def get_category(sub_dirs = None): 7 | if sub_dirs is None: 8 | return NAMESPACE 9 | else: 10 | return "{}/utils".format(NAMESPACE) 11 | 12 | def get_name(name): 13 | return '{} ({})'.format(name, NAMESPACE) 14 | 15 | class AnyType(str): 16 | 17 | def __ne__(self, __value: object) -> bool: 18 | return False 19 | 20 | 21 | any_type = AnyType("*") 22 | 23 | 24 | def is_none(value): 25 | if value is not None: 26 | if isinstance(value, dict) and 'model' in value and 'clip' in value: 27 | return is_context_empty(value) 28 | return value is None 29 | 30 | 31 | class EGRYQHNode: 32 | 33 | NAME = get_name("Any Switch") 34 | CATEGORY = get_category() 35 | 36 | @classmethod 37 | 
def INPUT_TYPES(cls): 38 | return { 39 | "required": {}, 40 | "optional": { 41 | "input01": (any_type,), 42 | "input02": (any_type,), 43 | "input03": (any_type,), 44 | "input04": (any_type,), 45 | "input05": (any_type,), 46 | "input06": (any_type,), 47 | }, 48 | } 49 | 50 | RETURN_TYPES = (any_type,) 51 | RETURN_NAMES = ('output',) 52 | FUNCTION = "switch" 53 | CATEGORY = "2🐕/🆎Choice" 54 | 55 | def switch(self, input01=None, input02=None, input03=None, input04=None, input05=None,input06=None): 56 | any_value = None 57 | if not is_none(input01): 58 | any_value = input01 59 | elif not is_none(input02): 60 | any_value = input02 61 | elif not is_none(input03): 62 | any_value = input03 63 | elif not is_none(input04): 64 | any_value = input04 65 | elif not is_none(input05): 66 | any_value = input05 67 | elif not is_none(input06): 68 | any_value = input06 69 | return (any_value,) 70 | 71 | -------------------------------------------------------------------------------- /nodes/egszqh.py: -------------------------------------------------------------------------------- 1 | NAMESPACE='2🐕Choice Switch' 2 | 3 | def is_context_empty(ctx): 4 | """Checks if the provided ctx is None or contains just None values.""" 5 | return not ctx or all(v is None for v in ctx.values()) 6 | 7 | def get_category(sub_dirs = None): 8 | if sub_dirs is None: 9 | return NAMESPACE 10 | else: 11 | return "{}/utils".format(NAMESPACE) 12 | 13 | def get_name(name): 14 | return '{} ({})'.format(name, NAMESPACE) 15 | 16 | class AnyType(str): 17 | """A special class that is always equal in not equal comparisons. Credit to pythongosssss""" 18 | 19 | def __ne__(self, __value: object) -> bool: 20 | return False 21 | 22 | 23 | any_type = AnyType("*") 24 | 25 | 26 | def is_none(value): 27 | """Checks if a value is none. 
Pulled out in case we want to expand what 'None' means.""" 28 | if value is not None: 29 | if isinstance(value, dict) and 'model' in value and 'clip' in value: 30 | return is_context_empty(value) 31 | return value is None 32 | 33 | 34 | class EGXZQHNode: 35 | """The any switch. """ 36 | 37 | NAME = get_name("Select output") 38 | CATEGORY = get_category() 39 | 40 | @classmethod 41 | def INPUT_TYPES(cls): 42 | return { 43 | "required": {}, 44 | "optional": { 45 | "input01": (any_type,), 46 | "input02": (any_type,), 47 | "input03": (any_type,), 48 | "input04": (any_type,), 49 | "input05": (any_type,), 50 | "input06": (any_type,), 51 | "choice": (["1", "2", "3", "4", "5", "6"],) 52 | }, 53 | } 54 | 55 | RETURN_TYPES = (any_type,) 56 | RETURN_NAMES = ('output',) 57 | FUNCTION = "switch" 58 | CATEGORY = "2🐕/🆎Choice" 59 | 60 | def switch(self, input01=None, input02=None, input03=None, input04=None, input05=None, input06=None, choice=None): 61 | """Chooses the item to output based on the user's selection.""" 62 | if choice is not None: 63 | if choice == "1": 64 | return (input01,) 65 | elif choice == "2": 66 | return (input02,) 67 | elif choice == "3": 68 | return (input03,) 69 | elif choice == "4": 70 | return (input04,) 71 | elif choice == "5": 72 | return (input05,) 73 | elif choice == "6": 74 | return (input06,) 75 | else: 76 | return (None,) 77 | 78 | -------------------------------------------------------------------------------- /nodes/egtjtxsy.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | import torch 4 | import sys 5 | from PIL import Image, ImageOps 6 | from torchvision import transforms as T 7 | from torchvision.transforms import functional as TF 8 | 9 | 10 | 11 | 12 | my_dir = os.path.dirname(os.path.abspath(__file__)) 13 | custom_nodes_dir = os.path.abspath(os.path.join(my_dir, '.')) 14 | comfy_dir = os.path.abspath(os.path.join(my_dir, '..')) 15 | sys.path.append(comfy_dir) 16 | 
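The `AnyType("*")` helper used by the switch nodes above works because ComfyUI validates links by comparing the declared socket type to the connected type with `!=`; a `str` subclass that never reports inequality therefore matches every type. A minimal self-contained sketch of the trick:

```python
class AnyType(str):
    """A str subclass that never compares unequal.

    ComfyUI checks `declared_type != connected_type` when validating
    connections, so always returning False lets "*" accept any type.
    Credit to pythongosssss, as noted in this repo's LICENSE.
    """

    def __ne__(self, other: object) -> bool:
        return False


any_type = AnyType("*")

# `!=` is always False, so the wildcard "matches" any type name.
wildcard_matches = not (any_type != "IMAGE") and not (any_type != "LATENT")
```

Note that only `__ne__` is overridden: ordinary `==` comparisons still behave like `str`, which is exactly enough for the frontend's inequality check while keeping the value printable as `"*"`.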
17 | from nodes import MAX_RESOLUTION 18 | 19 | 20 | def tensor2pil(image: torch.Tensor) -> Image.Image: 21 | return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 22 | 23 | 24 | def pil2tensor(image: Image.Image) -> torch.Tensor: 25 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 26 | 27 | def common_upscale(samples, Scale_width, Scale_height, upscale_method, crop): 28 | if crop == "center": 29 | old_Scale_width = samples.shape[3] 30 | old_Scale_height = samples.shape[2] 31 | old_aspect = old_Scale_width / old_Scale_height 32 | new_aspect = Scale_width / Scale_height 33 | x = 0 34 | y = 0 35 | if old_aspect > new_aspect: 36 | x = round((old_Scale_width - old_Scale_width * (new_aspect / old_aspect)) / 2) 37 | elif old_aspect < new_aspect: 38 | y = round((old_Scale_height - old_Scale_height * (old_aspect / new_aspect)) / 2) 39 | s = samples[:,:,y:old_Scale_height-y,x:old_Scale_width-x] 40 | else: 41 | s = samples 42 | 43 | if upscale_method == "bislerp": 44 | return bislerp(s, Scale_width, Scale_height) 45 | elif upscale_method == "lanczos": 46 | return lanczos(s, Scale_width, Scale_height) 47 | else: 48 | return torch.nn.functional.interpolate(s, size=(Scale_height, Scale_width), mode=upscale_method) 49 | 50 | 51 | class EGCPSYTJNode: 52 | 53 | @classmethod 54 | def INPUT_TYPES(cls): 55 | return { 56 | "required": { 57 | "original_image": ("IMAGE",), 58 | "Watermark_image": ("IMAGE",), 59 | "Zoom_mode": (["None", "Fit", "zoom", "Scale_according_to_input_width_and_height"],), 60 | "Scaling_method": (["nearest-exact", "bilinear", "area"],), 61 | "Scaling_factor": ("FLOAT", {"default": 1, "min": 0.01, "max": 16.0, "step": 0.1}), 62 | "Scale_width": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 64}), 63 | "Scale_height": ("INT", {"default": 512, "min": 0, "max": MAX_RESOLUTION, "step": 64}), 64 | "initial_position": (["Centered", "Up", "Down", "Left", "Right", "Up Left", "Up 
Right", "Down Left", "Down Right"],), 65 | "X_direction": ("INT", {"default": 0, "min": -48000, "max": 48000, "step": 10}), 66 | "Y_direction": ("INT", {"default": 0, "min": -48000, "max": 48000, "step": 10}), 67 | "rotate": ("INT", {"default": 0, "min": -180, "max": 180, "step": 5}), 68 | "transparency": ("FLOAT", {"default": 0, "min": 0, "max": 100, "step": 5, "display": "slider"}), 69 | }, 70 | "optional": {"mask": ("MASK",),} 71 | } 72 | 73 | RETURN_TYPES = ("IMAGE",) 74 | FUNCTION = "apply_Watermark_image" 75 | CATEGORY = "2🐕/🔖Watermark addition" 76 | 77 | def apply_Watermark_image(self, original_image, Watermark_image, Zoom_mode, Scaling_method, Scaling_factor, 78 | Scale_width, Scale_height, X_direction, Y_direction, rotate, transparency, initial_position, mask=None): 79 | 80 | 81 | size = Scale_width, Scale_height 82 | location = X_direction, Y_direction 83 | mask = mask 84 | 85 | 86 | if Zoom_mode != "None": 87 | 88 | Watermark_image_size = Watermark_image.size() 89 | Watermark_image_size = (Watermark_image_size[2], Watermark_image_size[1]) 90 | if Zoom_mode == "Fit": 91 | h_ratio = original_image.size()[1] / Watermark_image_size[1] 92 | w_ratio = original_image.size()[2] / Watermark_image_size[0] 93 | ratio = min(h_ratio, w_ratio) 94 | Watermark_image_size = tuple(round(dimension * ratio) for dimension in Watermark_image_size) 95 | elif Zoom_mode == "zoom": 96 | Watermark_image_size = tuple(int(dimension * Scaling_factor) for dimension in Watermark_image_size) 97 | elif Zoom_mode == "Scale_according_to_input_width_and_height": 98 | Watermark_image_size = (size[0], size[1]) 99 | 100 | samples = Watermark_image.movedim(-1, 1) 101 | Watermark_image =common_upscale(samples, Watermark_image_size[0], Watermark_image_size[1], Scaling_method, False) 102 | Watermark_image = Watermark_image.movedim(1, -1) 103 | 104 | Watermark_image = tensor2pil(Watermark_image) 105 | 106 | 107 | Watermark_image = Watermark_image.convert('RGBA') 108 | 
Watermark_image.putalpha(Image.new("L", Watermark_image.size, 255)) 109 | 110 | 111 | if mask is not None: 112 | 113 | mask = tensor2pil(mask) 114 | mask = mask.resize(Watermark_image.size) 115 | 116 | Watermark_image.putalpha(ImageOps.invert(mask)) 117 | 118 | 119 | Watermark_image = Watermark_image.rotate(rotate, expand=True) 120 | 121 | 122 | r, g, b, a = Watermark_image.split() 123 | a = a.point(lambda x: max(0, int(x * (1 - transparency / 100)))) 124 | Watermark_image.putalpha(a) 125 | 126 | 127 | print(f"Alignment value received: {initial_position}") 128 | 129 | 130 | print(f"Base Image Size: {original_image.size()}") 131 | 132 | print(f"Overlay Image Size: {Watermark_image.size}") 133 | 134 | original_image_Scale_width, original_image_Scale_height = original_image.size()[2], original_image.size()[1] 135 | Watermark_image_Scale_width, Watermark_image_Scale_height = Watermark_image.size 136 | 137 | print(f"Original X_direction: {X_direction}, Y_direction: {Y_direction}") 138 | 139 | 140 | X_direction_int = None 141 | Y_direction_int = None 142 | 143 | if initial_position == "Centered": 144 | X_direction_int = int(X_direction + (original_image_Scale_width - Watermark_image_Scale_width) / 2) 145 | Y_direction_int = int(Y_direction + (original_image_Scale_height - Watermark_image_Scale_height) / 2) 146 | elif initial_position == "Up": 147 | X_direction_int = int(X_direction + (original_image_Scale_width - Watermark_image_Scale_width) / 2) 148 | Y_direction_int = Y_direction 149 | elif initial_position == "Down": 150 | X_direction_int = int(X_direction + (original_image_Scale_width - Watermark_image_Scale_width) / 2) 151 | Y_direction_int = int(Y_direction + original_image_Scale_height - Watermark_image_Scale_height) 152 | elif initial_position == "Left": 153 | Y_direction_int = int(Y_direction + (original_image_Scale_height - Watermark_image_Scale_height) / 2) 154 | X_direction_int = X_direction 155 | elif initial_position == "Right": 156 | X_direction_int = 
int(X_direction + original_image_Scale_width - Watermark_image_Scale_width) 157 | Y_direction_int = int(Y_direction + (original_image_Scale_height - Watermark_image_Scale_height) / 2) 158 | elif initial_position == "Up Left": 159 | X_direction_int, Y_direction_int = X_direction, Y_direction 160 | elif initial_position == "Up Right": 161 | X_direction_int = int(original_image_Scale_width - Watermark_image_Scale_width + X_direction) 162 | Y_direction_int = Y_direction 163 | elif initial_position == "Down Left": 164 | X_direction_int = X_direction 165 | Y_direction_int = int(original_image_Scale_height - Watermark_image_Scale_height + Y_direction) 166 | elif initial_position == "Down Right": 167 | X_direction_int = int(X_direction + original_image_Scale_width - Watermark_image_Scale_width) 168 | Y_direction_int = int(Y_direction + original_image_Scale_height - Watermark_image_Scale_height) 169 | 170 | if X_direction_int is not None and Y_direction_int is not None: 171 | 172 | location = X_direction_int, Y_direction_int 173 | else: 174 | 175 | location = X_direction, Y_direction 176 | 177 | 178 | original_image_list = torch.unbind(original_image, dim=0) 179 | 180 | 181 | processed_original_image_list = [] 182 | for tensor in original_image_list: 183 | 184 | image = tensor2pil(tensor) 185 | 186 | 187 | # paste with the watermark's own alpha channel as the mask so rotated corners stay transparent and the transparency slider is respected 188 | image.paste(Watermark_image, location, Watermark_image) 189 | 190 | 191 | 192 | 193 | processed_tensor = pil2tensor(image) 194 | 195 | 196 | processed_original_image_list.append(processed_tensor) 197 | 198 | 199 | original_image = torch.stack([tensor.squeeze() for tensor in processed_original_image_list]) 200 | 201 | 202 | return (original_image,) 203 | -------------------------------------------------------------------------------- /nodes/egtscdscjnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSCJLNode: 6 | JSON_FILE_PATH = 'options.json' 7 |
CATEGORY_KEYS = ['Background', 'Sky', 'Indoor', 'Outdoor', 'Building', 'Scene Atmosphere', 'Architect'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSCJLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSCJLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in 
self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdsdgnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSDGLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Light perception', 'lighting'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 
31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSDGLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSDGLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = 
f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdsfgnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSFGLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Graphic effects','Art style','Theme','Art unconventional','Illustration style','Artist','Film director','Coding method'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSFGLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def
get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSFGLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdsjtnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class 
EGTSCDSJTLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Perspective', 'Positioning', 'Action', 'Composition Method', 'Character Lens','Lens', 'Camera Lens'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSJTLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSJTLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | 
for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdsqtnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSQTLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Color', 'Rare Colors','Twelve Constellations', 'Magic Elements'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error 
reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSQTLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSQTLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs 
and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdsrwnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSRWLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Type', 'Character in the work', 'Face', 'Rare hairstyle', 'Rare hairstyle man', 'Upper body decoration', 'Lower body decoration', 'Full body decoration', 'Hair', 'Head accessories','Ears', 'Neck', 'Clothing', 'Shoes and socks'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSRWLNode.get_options_keys(key)), {"default": 
"None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSRWLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 
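The fixed prompt-master nodes above all share one output convention: each option selected from `options.json` is wrapped in Stable Diffusion attention syntax `(text:weight)` whenever its weight slider is moved off 1.0, and the fragments are comma-joined with a trailing comma. A minimal standalone sketch of that convention — the helper names `format_weighted` and `join_prompt` are mine, not part of this repo:

```python
def format_weighted(text: str, weight: float = 1.0) -> str:
    # Mirror the nodes' f"({...}:{weight:.1f})" formatting: only wrap the
    # fragment in attention syntax when the weight differs from the default.
    if weight != 1:
        return f"({text}:{weight:.1f})"
    return text


def join_prompt(parts) -> str:
    # Drop empty fragments, comma-join, and append the trailing comma the
    # nodes emit so several prompt masters can be concatenated downstream.
    prompt = ','.join(p for p in parts if p).strip()
    return prompt + ',' if prompt else ''
```

One behavioral difference worth noting: because the nodes append the trailing comma before their emptiness check, they return a bare `','` when nothing is selected, while this sketch returns an empty string.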
-------------------------------------------------------------------------------- /nodes/egtscdssjdiy.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGSJNode: 6 | JSON_FILE_PATH = 'options.json' 7 | @classmethod 8 | def load_category_keys(cls): 9 | 10 | current_dir = os.path.dirname(os.path.abspath(__file__)) 11 | 12 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 13 | 14 | json_dir = os.path.join(parent_dir, 'json') 15 | 16 | json_file_path = os.path.join(json_dir, cls.JSON_FILE_PATH) 17 | 18 | 19 | with open(json_file_path, 'r', encoding='utf-8') as f: 20 | options = json.load(f) 21 | cls.CATEGORY_KEYS = list(options.keys()) 22 | 23 | @classmethod 24 | def INPUT_TYPES(cls): 25 | if not hasattr(cls, 'CATEGORY_KEYS'): 26 | cls.load_category_keys() 27 | 28 | input_types = { 29 | "optional": { 30 | 31 | "Custom_Type1": (["None"] + cls.CATEGORY_KEYS, {"default": "None"}), 32 | "weight1": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}), 33 | "Custom_Type2": (["None"] + cls.CATEGORY_KEYS, {"default": "None"}), 34 | "weight2": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}), 35 | "Custom_Type3": (["None"] + cls.CATEGORY_KEYS, {"default": "None"}), 36 | "weight3": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}), 37 | "Custom_Type4": (["None"] + cls.CATEGORY_KEYS, {"default": "None"}), 38 | "weight4": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}), 39 | "Custom_Type5": (["None"] + cls.CATEGORY_KEYS, {"default": "None"}), 40 | "weight5": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}), 41 | "seed": ("INT", {"default": 0, "min": -1125899906842624, "max": 1125899906842624}), 42 | }, 43 | "required": { 44 | } 45 | } 46 | return input_types 47 | 48 | @staticmethod 49 | def 
get_options_keys(key): 50 | 51 | current_dir = os.path.dirname(os.path.abspath(__file__)) 52 | 53 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 54 | 55 | json_dir = os.path.join(parent_dir, 'json') 56 | 57 | json_file_path = os.path.join(json_dir, EGSJNode.JSON_FILE_PATH) 58 | 59 | 60 | with open(json_file_path, 'r', encoding='utf-8') as f: 61 | options = json.load(f) 62 | return list(options[key].keys()) 63 | 64 | RETURN_TYPES = ("STRING",) 65 | RETURN_NAMES = ("prompt",) 66 | FUNCTION = "generate_prompt" 67 | CATEGORY = "2🐕/🏷️Prompt word master/🔀random class" 68 | 69 | def __init__(self): 70 | self.load_json() 71 | 72 | def load_json(self): 73 | 74 | current_dir = os.path.dirname(os.path.abspath(__file__)) 75 | 76 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 77 | 78 | json_dir = os.path.join(parent_dir, 'json') 79 | 80 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 81 | 82 | 83 | with open(json_file_path, 'r', encoding='utf-8') as f: 84 | self.options = json.load(f) 85 | 86 | def generate_prompt(self, **kwargs): 87 | prompt_parts = [] 88 | for i in range(1, 6): 89 | selected_key = kwargs.get(f"Custom_Type{i}") 90 | weight = kwargs.get(f"weight{i}", 1.0) 91 | if selected_key not in self.CATEGORY_KEYS: 92 | continue 93 | 94 | options_keys = [k for k in self.get_options_keys(selected_key) if k != "None"] 95 | if options_keys: 96 | random_choice = random.choice(options_keys) 97 | 98 | if random_choice != "None": 99 | 100 | if weight != 1: 101 | prompt_parts.append(f"({self.options[selected_key][random_choice]}:{weight:.1f})") 102 | else: 103 | prompt_parts.append(self.options[selected_key][random_choice]) 104 | prompt = ','.join(prompt_parts).strip() 105 | prompt += ',' 106 | return (prompt,) if prompt else ('',) 107 | 108 | 109 | 110 | 111 | 112 | 113 | -------------------------------------------------------------------------------- /nodes/egtscdswpnode.py: 
-------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSWPLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Items', 'Flowers', 'Food', 'Printing Materials', 'Physical Materials'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSWPLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSWPLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | 
CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscdszlnode.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | import random 4 | 5 | class EGTSCDSZLLNode: 6 | JSON_FILE_PATH = 'options.json' 7 | CATEGORY_KEYS = ['Image type', 'Renderer', 'Positive prompt word', 'Negative prompt word'] 8 | 9 | def __init__(self): 10 | self.load_json() 11 | def load_json(self): 12 | 13 | current_dir = os.path.dirname(os.path.abspath(__file__)) 14 | 15 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 16 | 17 | json_dir = os.path.join(parent_dir, 'json') 18 | 19 | json_file_path = os.path.join(json_dir, self.JSON_FILE_PATH) 20 | 21 | try: 22 | with 
open(json_file_path, 'r', encoding='utf-8') as f: 23 | self.options = json.load(f) 24 | except Exception as e: 25 | print(f"Error reading JSON file: {e}") 26 | 27 | @classmethod 28 | def INPUT_TYPES(cls): 29 | return { 30 | "required": { 31 | **cls.get_input_types_from_keys(cls.CATEGORY_KEYS), 32 | "random": (["yes", "no"], {"default": "no"}), 33 | "seed": ("INT", {"default": 0,"min": -1125899906842624,"max": 1125899906842624}), 34 | } 35 | } 36 | 37 | @staticmethod 38 | def get_input_types_from_keys(keys): 39 | input_types = {} 40 | for key in keys: 41 | input_types[key] = (tuple(EGTSCDSZLLNode.get_options_keys(key)), {"default": "None"}) 42 | input_types[f"{key}weight"] = ("FLOAT", {"default": 1.0, "min": 0.1, "max": 2, "step": 0.1, "display": "slider"}) 43 | return input_types 44 | 45 | @staticmethod 46 | def get_options_keys(key): 47 | current_dir = os.path.dirname(os.path.abspath(__file__)) 48 | parent_dir = os.path.abspath(os.path.join(current_dir, '..')) 49 | json_dir = os.path.join(parent_dir, 'json') 50 | json_file_path = os.path.join(json_dir, EGTSCDSZLLNode.JSON_FILE_PATH) 51 | 52 | with open(json_file_path, 'r', encoding='utf-8') as f: 53 | options = json.load(f) 54 | return list(options[key].keys()) 55 | 56 | RETURN_TYPES = ("STRING",) 57 | RETURN_NAMES = ("prompt",) 58 | FUNCTION = "generate_prompt" 59 | CATEGORY = "2🐕/🏷️Prompt word master/📌Fixed" 60 | 61 | def generate_prompt(self, **kwargs): 62 | prompt_parts = {} 63 | for key in self.CATEGORY_KEYS: 64 | if key in kwargs and kwargs[key] in self.options[key] and kwargs[key] != "None": 65 | weight_key = f"{key}weight" 66 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 67 | if weight != 1: 68 | prompt_parts[key] = f"({self.options[key][kwargs[key]]}:{weight:.1f})" 69 | else: 70 | prompt_parts[key] = self.options[key][kwargs[key]] 71 | 72 | if kwargs.get("random") == "yes": 73 | Optional = list(self.options[key].keys()) 74 | Optional.remove("None") 75 | 
Random_selection = random.choice(Optional) 76 | weight_key = f"{key}weight" 77 | weight = kwargs[weight_key] if weight_key in kwargs and kwargs[weight_key] is not None else 1 78 | if weight != 1: 79 | prompt_parts[key] = f"({self.options[key][Random_selection]}:{weight:.1f})" 80 | else: 81 | prompt_parts[key] = self.options[key][Random_selection] 82 | 83 | prompt_parts = {k: v for k, v in prompt_parts.items() if v} 84 | prompt = ','.join(prompt_parts.values()).strip() 85 | prompt += ',' 86 | return (prompt,) if prompt else ('',) 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /nodes/egtscmb.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | 4 | class EGTSCMBGLNode: 5 | JSON_FILE_PATH = '../json/egtscglds.json' 6 | @classmethod 7 | def load_options(cls): 8 | current_dir = os.path.dirname(os.path.abspath(__file__)) 9 | json_file_path = os.path.join(current_dir, cls.JSON_FILE_PATH) 10 | directory = os.path.dirname(json_file_path) 11 | 12 | 13 | if not os.path.exists(directory): 14 | os.makedirs(directory) 15 | 16 | 17 | if os.path.exists(json_file_path): 18 | with open(json_file_path, 'r', encoding='utf-8') as f: 19 | try: 20 | cls.options = json.load(f) 21 | except json.JSONDecodeError: 22 | print("Template file format error, a new template file will be created for you。") 23 | cls.options = {} 24 | else: 25 | print("The template file does not exist. 
We will create a new template file for you。") 26 | cls.options = {} 27 | cls.save_options() 28 | @classmethod 29 | def save_options(cls): 30 | current_dir = os.path.dirname(os.path.abspath(__file__)) 31 | json_file_path = os.path.join(current_dir, cls.JSON_FILE_PATH) 32 | directory = os.path.dirname(json_file_path) 33 | 34 | 35 | if not os.path.exists(directory): 36 | os.makedirs(directory) 37 | 38 | 39 | with open(json_file_path, 'w', encoding='utf-8') as f: 40 | json.dump(cls.options, f, ensure_ascii=False, indent=4) 41 | 42 | @classmethod 43 | def INPUT_TYPES(cls): 44 | cls.load_options() 45 | keys_list = list(cls.options.keys()) 46 | default_key = keys_list[0] if keys_list else 'None' 47 | return { 48 | "optional": { 49 | "Read": (keys_list, {"default": default_key}), 50 | "New_Name": ("STRING", {"default": "Please enter a name"}), 51 | "New_Content": ("STRING", {"default": "Please enter the content"}), 52 | "Function": (["Read", "New", "Delete"], {"default": "Read"}), 53 | }, 54 | "required": { 55 | } 56 | } 57 | 58 | RETURN_TYPES = ("STRING",) 59 | RETURN_NAMES = ("Text",) 60 | FUNCTION = "process_action" 61 | CATEGORY = "2🐕/🏷️Prompt word master/✍️custom" 62 | 63 | def process_action(self, Read='None', New_Name='Please enter a name', New_Content='Please enter the content', Function='Read'): 64 | self.load_options() 65 | if Function == 'New': 66 | print("2🐕Successfully saved for you, more SD tutorials are available at B站@灵仙儿和二狗子") 67 | self.options[New_Name] = New_Content 68 | self.save_options() 69 | 70 | self.load_options() 71 | return ("2🐕Successfully saved for you, more SD tutorials are available at B站@灵仙儿和二狗子",) 72 | elif Function == 'Delete': 73 | if Read in self.options: 74 | print("2🐕Successfully deleted for you, more SD tutorials are available at B站@灵仙儿和二狗子") 75 | del self.options[Read] 76 | self.save_options() 77 | 78 | self.load_options() 79 | return ("2🐕Successfully deleted for you, more SD tutorials are available at B站@灵仙儿和二狗子",) 80 | else: 81 | 
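The template-manager node above is, at its core, a name → content store persisted to a JSON file, with the same "corrupt or missing file starts fresh" behavior. A minimal stand-alone sketch of that load/save/New/Delete/Read flow (hypothetical `TemplateStore` class, simplified from the node):

```python
import json
import os

class TemplateStore:
    """Minimal JSON-backed name -> text store mirroring the node's
    Read/New/Delete flow. Hypothetical helper for illustration."""

    def __init__(self, path):
        self.path = path
        self.options = {}
        if os.path.exists(path):
            try:
                with open(path, 'r', encoding='utf-8') as f:
                    self.options = json.load(f)
            except json.JSONDecodeError:
                # corrupt file: start with an empty store, as the node does
                self.options = {}

    def save(self):
        os.makedirs(os.path.dirname(self.path) or '.', exist_ok=True)
        with open(self.path, 'w', encoding='utf-8') as f:
            json.dump(self.options, f, ensure_ascii=False, indent=4)

    def new(self, name, content):
        self.options[name] = content
        self.save()

    def delete(self, name):
        if self.options.pop(name, None) is not None:
            self.save()

    def read(self, name):
        return self.options.get(name)
```

Because every mutation is written straight back to disk, a second instance pointed at the same path sees the change immediately, which matches the node's reload-after-every-operation pattern.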
print("2🐕We have checked that the template does not exist for you. More SD tutorials are available at B站@灵仙儿和二狗子") 82 | return ("2🐕We have checked that the template does not exist for you. More SD tutorials are available at B站@灵仙儿和二狗子",) 83 | elif Function == 'Read': 84 | if not Read or Read not in self.options: 85 | print("2🐕We have checked that the template does not exist for you. More SD tutorials are available at B站@灵仙儿和二狗子") 86 | return ("2🐕We have checked that the template does not exist for you. More SD tutorials are available at B站@灵仙儿和二狗子",) 87 | else: 88 | 89 | print(f"2🐕We have successfully read it for you. The template content is as follows. More SD tutorials are available at B站@灵仙儿和二狗子:") 90 | print(self.options[Read]) 91 | 92 | return (self.options[Read],) 93 | else: 94 | print("2🐕Operation error, please refresh the page. More SD tutorials are available at B站@灵仙儿和二狗子") 95 | return ("2🐕Operation error, please refresh the page. More SD tutorials are available at B站@灵仙儿和二狗子",) 96 | 97 | 98 | 99 | 100 | 101 | -------------------------------------------------------------------------------- /nodes/egtxcglj.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import torch 3 | from PIL import Image, ImageFilter 4 | 5 | specific_filters = [ 6 | 'BLUR', 7 | 'CONTOUR', 8 | 'DETAIL', 9 | 'EDGE_ENHANCE', 10 | 'EDGE_ENHANCE_MORE', 11 | 'EMBOSS', 12 | 'FIND_EDGES', 13 | 'GaussianBlur', 14 | 'MaxFilter', 15 | 'MedianFilter', 16 | 'MinFilter', 17 | 'ModeFilter', 18 | 'SHARPEN', 19 | 'SMOOTH', 20 | 'SMOOTH_MORE', 21 | 'UnsharpMask' 22 | ] 23 | 24 | 25 | class EGTXLJNode: 26 | RETURN_TYPES = ("IMAGE",) 27 | FUNCTION = "apply_filter" 28 | CATEGORY = "2🐕/🖼️Image/🪞Filter" 29 | 30 | @classmethod 31 | def INPUT_TYPES(s): 32 | return { 33 | "required": { 34 | "image": ("IMAGE",), 35 | "filter_type": (specific_filters,), 36 | }, 37 | } 38 | 39 | def apply_filter( 40 | self, 41 | image: torch.Tensor, 42 | filter_type:
str, 43 | ): 44 | if filter_type not in specific_filters: 45 | raise ValueError(f"Unknown filter type: {filter_type}") 46 | 47 | image_pil = Image.fromarray((image[0].numpy() * 255).astype(np.uint8)) 48 | 49 | try: 50 | filter_instance = getattr(ImageFilter, filter_type) 51 | except AttributeError: 52 | filter_method = getattr(image_pil, filter_type) 53 | if callable(filter_method): 54 | filter_instance = filter_method() 55 | else: 56 | raise ValueError(f"Unknown filter type: {filter_type}") 57 | 58 | try: 59 | image_pil = image_pil.filter(filter_instance) 60 | except TypeError: 61 | filter_method = getattr(image_pil, filter_type) 62 | if callable(filter_method): 63 | default_params = filter_method.__defaults__ 64 | if default_params: 65 | filter_instance = filter_method(*default_params) 66 | image_pil = image_pil.filter(filter_instance) 67 | else: 68 | raise TypeError(f"Filter {filter_type} requires arguments but no default parameters are provided.") 69 | else: 70 | raise TypeError(f"Unknown filter type: {filter_type}") 71 | 72 | image_tensor = torch.from_numpy(np.array(image_pil).astype(np.float32) / 255).unsqueeze(0) 73 | 74 | return (image_tensor,) 75 | 76 | -------------------------------------------------------------------------------- /nodes/egtxljbc.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageOps, ImageSequence 2 | from PIL.PngImagePlugin import PngInfo 3 | import os 4 | import numpy as np 5 | import json 6 | import sys 7 | from comfy.cli_args import args 8 | import folder_paths 9 | 10 | 11 | current_dir = os.path.dirname(__file__) 12 | 13 | grandparent_dir = os.path.abspath(os.path.join(current_dir, '..', '..')) 14 | 15 | sys.path.append(grandparent_dir) 16 | 17 | from comfy.cli_args import args 18 | 19 | class EGTXBCLJBCNode: 20 | def __init__(self): 21 | self.output_dir = folder_paths.get_output_directory() 22 | self.type = "output" 23 | self.prefix_append = "" 24 | 
self.compress_level = 4 25 | @classmethod 26 | def INPUT_TYPES(s): 27 | return {"required": 28 | {"images": ("IMAGE", ), 29 | "filename_prefix": ("STRING", {"default": "ComfyUI"}), 30 | "custom_output_dir": ("STRING", {"default": "", "optional": True})}, 31 | "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, 32 | } 33 | RETURN_TYPES = () 34 | FUNCTION = "save_images" 35 | OUTPUT_NODE = True 36 | CATEGORY = "2🐕/🖼️Image" 37 | def save_images(self, images, filename_prefix="ComfyUI", prompt=None, extra_pnginfo=None, custom_output_dir=""): 38 | 39 | default_results = self._save_images_to_dir(images, filename_prefix, prompt, extra_pnginfo, self.output_dir) 40 | 41 | 42 | if custom_output_dir: 43 | self._save_images_to_dir(images, filename_prefix, prompt, extra_pnginfo, custom_output_dir) 44 | 45 | 46 | return { "ui": { "images": default_results } } 47 | def _save_images_to_dir(self, images, filename_prefix, prompt, extra_pnginfo, output_dir): 48 | results = list() 49 | full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, output_dir, images[0].shape[1], images[0].shape[0]) 50 | 51 | for (batch_number, image) in enumerate(images): 52 | i = 255. 
* image.cpu().numpy() 53 | img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8)) 54 | metadata = None 55 | if not args.disable_metadata: 56 | metadata = PngInfo() 57 | if prompt is not None: 58 | metadata.add_text("prompt", json.dumps(prompt)) 59 | if extra_pnginfo is not None: 60 | for x in extra_pnginfo: 61 | metadata.add_text(x, json.dumps(extra_pnginfo[x])) 62 | filename_with_batch_num = filename.replace("%batch_num%", str(batch_number)) 63 | file = f"{filename_with_batch_num}_{counter:05}_.png" 64 | img.save(os.path.join(full_output_folder, file), pnginfo=metadata, compress_level=self.compress_level) 65 | 66 | 67 | display_path = os.path.join(output_dir, subfolder) 68 | results.append({ 69 | "filename": file, 70 | "subfolder": display_path, 71 | "type": self.type 72 | }) 73 | counter += 1 74 | 75 | return results 76 | 77 | -------------------------------------------------------------------------------- /nodes/egtxwhlj.py: -------------------------------------------------------------------------------- 1 | import os 2 | import numpy as np 3 | from PIL import Image 4 | import torch 5 | from typing import Union, List 6 | import subprocess 7 | try: 8 | import pilgram 9 | except ImportError: 10 | subprocess.check_call(['pip', 'install', 'pilgram']) 11 | def tensor2pil(image): 12 | return Image.fromarray(np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 13 | def pil2tensor(image): 14 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 15 | class EGWHLJ: 16 | def __init__(self): 17 | pass 18 | @classmethod 19 | def INPUT_TYPES(cls): 20 | return { 21 | "required": { 22 | "image": ("IMAGE",), 23 | "style": ([ 24 | "1977", 25 | "aden", 26 | "brannan", 27 | "brooklyn", 28 | "clarendon", 29 | "earlybird", 30 | "gingham", 31 | "hudson", 32 | "inkwell", 33 | "kelvin", 34 | "lark", 35 | "lofi", 36 | "maven", 37 | "mayfair", 38 | "moon", 39 | "nashville", 40 | "perpetua", 41 | "reyes", 42 | "rise", 43 | "slumber", 44 | "stinson", 45 | "toaster", 46 | "valencia", 47 | "walden", 48 | "willow", 49 | "xpro2" 50 | ],), 51 | }, 52 | "optional": { 53 | "All": ("BOOLEAN", {"default": False}), 54 | }, 55 | } 56 | RETURN_TYPES = ("IMAGE",) 57 | FUNCTION = "image_style_filter" 58 | CATEGORY = "2🐕/🖼️Image/🪞Filter" 59 | def image_style_filter(self, image, style, All=False): 60 | if All: 61 | tensors = [] 62 | for img in image: 63 | for filter_name in self.INPUT_TYPES()['required']['style'][0]: 64 | if filter_name == "1977": 65 | tensors.append(pil2tensor(pilgram._1977(tensor2pil(img)))) 66 | elif filter_name == "aden": 67 | tensors.append(pil2tensor(pilgram.aden(tensor2pil(img)))) 68 | elif filter_name == "brannan": 69 | tensors.append(pil2tensor(pilgram.brannan(tensor2pil(img)))) 70 | elif filter_name == "brooklyn": 71 | tensors.append(pil2tensor(pilgram.brooklyn(tensor2pil(img)))) 72 | elif filter_name == "clarendon": 73 | tensors.append(pil2tensor(pilgram.clarendon(tensor2pil(img)))) 74 | elif filter_name == "earlybird": 75 | tensors.append(pil2tensor(pilgram.earlybird(tensor2pil(img)))) 76 | elif filter_name == "gingham": 77 | tensors.append(pil2tensor(pilgram.gingham(tensor2pil(img)))) 78 | elif filter_name == "hudson": 79 | tensors.append(pil2tensor(pilgram.hudson(tensor2pil(img)))) 80 | elif filter_name == "inkwell": 81 | 
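The style-filter node (egtxwhlj.py) maps each style name to the pilgram function of the same name through a long if/elif ladder; only "1977" differs, since pilgram exports it as `_1977` (a Python identifier cannot start with a digit). The ladder can be collapsed into a single attribute lookup. `resolve_filter` below is a hypothetical refactoring sketch, not the node's actual code:

```python
def resolve_filter(module, style):
    """Look up a pilgram-style filter function by name instead of an
    if/elif ladder. "1977" is exported as _1977 because identifiers
    cannot start with a digit; unknown styles return None so the
    caller can pass the image through unchanged."""
    attr = "_1977" if style == "1977" else style
    fn = getattr(module, attr, None)
    return fn if callable(fn) else None
```

With this helper, both the "apply one style" and "apply all styles" branches reduce to `fn = resolve_filter(pilgram, name)` followed by `fn(pil_image)` when `fn` is not None, and mismatches like the `toaster` branch calling `pilgram.stinson` cannot occur.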
tensors.append(pil2tensor(pilgram.inkwell(tensor2pil(img)))) 82 | elif filter_name == "kelvin": 83 | tensors.append(pil2tensor(pilgram.kelvin(tensor2pil(img)))) 84 | elif filter_name == "lark": 85 | tensors.append(pil2tensor(pilgram.lark(tensor2pil(img)))) 86 | elif filter_name == "lofi": 87 | tensors.append(pil2tensor(pilgram.lofi(tensor2pil(img)))) 88 | elif filter_name == "maven": 89 | tensors.append(pil2tensor(pilgram.maven(tensor2pil(img)))) 90 | elif filter_name == "mayfair": 91 | tensors.append(pil2tensor(pilgram.mayfair(tensor2pil(img)))) 92 | elif filter_name == "moon": 93 | tensors.append(pil2tensor(pilgram.moon(tensor2pil(img)))) 94 | elif filter_name == "nashville": 95 | tensors.append(pil2tensor(pilgram.nashville(tensor2pil(img)))) 96 | elif filter_name == "perpetua": 97 | tensors.append(pil2tensor(pilgram.perpetua(tensor2pil(img)))) 98 | elif filter_name == "reyes": 99 | tensors.append(pil2tensor(pilgram.reyes(tensor2pil(img)))) 100 | elif filter_name == "rise": 101 | tensors.append(pil2tensor(pilgram.rise(tensor2pil(img)))) 102 | elif filter_name == "slumber": 103 | tensors.append(pil2tensor(pilgram.slumber(tensor2pil(img)))) 104 | elif filter_name == "stinson": 105 | tensors.append(pil2tensor(pilgram.stinson(tensor2pil(img)))) 106 | elif filter_name == "toaster": 107 | tensors.append(pil2tensor(pilgram.toaster(tensor2pil(img)))) 108 | elif filter_name == "valencia": 109 | tensors.append(pil2tensor(pilgram.valencia(tensor2pil(img)))) 110 | elif filter_name == "walden": 111 | tensors.append(pil2tensor(pilgram.walden(tensor2pil(img)))) 112 | elif filter_name == "willow": 113 | tensors.append(pil2tensor(pilgram.willow(tensor2pil(img)))) 114 | elif filter_name == "xpro2": 115 | tensors.append(pil2tensor(pilgram.xpro2(tensor2pil(img)))) 116 | tensors = torch.cat(tensors, dim=0) 117 | return (tensors, ) 118 | else: 119 | tensors = [] 120 | for img in image: 121 | if style == "1977": 122 | tensors.append(pil2tensor(pilgram._1977(tensor2pil(img)))) 123 |
elif style == "aden": 124 | tensors.append(pil2tensor(pilgram.aden(tensor2pil(img)))) 125 | elif style == "brannan": 126 | tensors.append(pil2tensor(pilgram.brannan(tensor2pil(img)))) 127 | elif style == "brooklyn": 128 | tensors.append(pil2tensor(pilgram.brooklyn(tensor2pil(img)))) 129 | elif style == "clarendon": 130 | tensors.append(pil2tensor(pilgram.clarendon(tensor2pil(img)))) 131 | elif style == "earlybird": 132 | tensors.append(pil2tensor(pilgram.earlybird(tensor2pil(img)))) 133 | elif style == "gingham": 134 | tensors.append(pil2tensor(pilgram.gingham(tensor2pil(img)))) 135 | elif style == "hudson": 136 | tensors.append(pil2tensor(pilgram.hudson(tensor2pil(img)))) 137 | elif style == "inkwell": 138 | tensors.append(pil2tensor(pilgram.inkwell(tensor2pil(img)))) 139 | elif style == "kelvin": 140 | tensors.append(pil2tensor(pilgram.kelvin(tensor2pil(img)))) 141 | elif style == "lark": 142 | tensors.append(pil2tensor(pilgram.lark(tensor2pil(img)))) 143 | elif style == "lofi": 144 | tensors.append(pil2tensor(pilgram.lofi(tensor2pil(img)))) 145 | elif style == "maven": 146 | tensors.append(pil2tensor(pilgram.maven(tensor2pil(img)))) 147 | elif style == "mayfair": 148 | tensors.append(pil2tensor(pilgram.mayfair(tensor2pil(img)))) 149 | elif style == "moon": 150 | tensors.append(pil2tensor(pilgram.moon(tensor2pil(img)))) 151 | elif style == "nashville": 152 | tensors.append(pil2tensor(pilgram.nashville(tensor2pil(img)))) 153 | elif style == "perpetua": 154 | tensors.append(pil2tensor(pilgram.perpetua(tensor2pil(img)))) 155 | elif style == "reyes": 156 | tensors.append(pil2tensor(pilgram.reyes(tensor2pil(img)))) 157 | elif style == "rise": 158 | tensors.append(pil2tensor(pilgram.rise(tensor2pil(img)))) 159 | elif style == "slumber": 160 | tensors.append(pil2tensor(pilgram.slumber(tensor2pil(img)))) 161 | elif style == "stinson": 162 | tensors.append(pil2tensor(pilgram.stinson(tensor2pil(img)))) 163 | elif style == "toaster": 164 | 
tensors.append(pil2tensor(pilgram.toaster(tensor2pil(img)))) 165 | elif style == "valencia": 166 | tensors.append(pil2tensor(pilgram.valencia(tensor2pil(img)))) 167 | elif style == "walden": 168 | tensors.append(pil2tensor(pilgram.walden(tensor2pil(img)))) 169 | elif style == "willow": 170 | tensors.append(pil2tensor(pilgram.willow(tensor2pil(img)))) 171 | elif style == "xpro2": 172 | tensors.append(pil2tensor(pilgram.xpro2(tensor2pil(img)))) 173 | else: 174 | tensors.append(img.unsqueeze(0)) 175 | tensors = torch.cat(tensors, dim=0) 176 | return (tensors, ) 177 | -------------------------------------------------------------------------------- /nodes/egtxystz.py: -------------------------------------------------------------------------------- 1 | import cv2 2 | import numpy as np 3 | import torch 4 | from PIL import Image, ImageEnhance 5 | class EGHTYSTZNode: 6 | @classmethod 7 | def INPUT_TYPES(s): 8 | return { 9 | "required": { 10 | "image": ("IMAGE",), 11 | "temperature": ("FLOAT", { 12 | "default": 0, 13 | "min": -100, 14 | "max": 100, 15 | "step": 5, 16 | "precision": 5, 17 | "display": "slider" 18 | }), 19 | "hue": ("FLOAT", { 20 | "default": 0, 21 | "min": -90, 22 | "max": 90, 23 | "step": 5, 24 | "precision": 180, 25 | "display": "slider" 26 | }), 27 | "brightness": ("FLOAT", { 28 | "default": 0, 29 | "min": -100, 30 | "max": 100, 31 | "step": 5, 32 | "precision": 200, 33 | "display": "slider" 34 | }), 35 | "contrast": ("FLOAT", { 36 | "default": 0, 37 | "min": -100, 38 | "max": 100, 39 | "step": 5, 40 | "precision": 200, 41 | "display": "slider" 42 | }), 43 | "saturation": ("FLOAT", { 44 | "default": 0, 45 | "min": -100, 46 | "max": 100, 47 | "step": 5, 48 | "precision": 200, 49 | "display": "slider" 50 | }), 51 | "gamma": ("FLOAT", { 52 | "default": 1, 53 | "min": 0.2, 54 | "max": 2.2, 55 | "step": 0.1, 56 | "precision": 200, 57 | "display": "slider" 58 | }), 59 | "blur": ("INT", { 60 | "default": 0, 61 | "min": 0, 62 | "max": 200, 63 | "step": 1, 64 |
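The color-adjustment node (egtxystz.py) maps its -100..100 percent sliders onto multiplicative enhancement factors (`brightness = 1 + brightness/100`, likewise for contrast, saturation, and temperature) and applies gamma as a power function on values normalized to 0..1. A pure-Python sketch of that arithmetic, with hypothetical helper names:

```python
def slider_to_factor(percent):
    """Map a -100..100 percent slider to an enhancement factor in 0..2:
    0 -> 1.0 (no change), -100 -> 0.0 (fully removed), +100 -> 2.0 (doubled)."""
    return 1.0 + percent / 100.0

def apply_gamma(value, gamma):
    """Gamma-correct a single channel value normalized to 0..1.
    gamma < 1 brightens midtones, gamma > 1 darkens them."""
    clamped = min(max(value, 0.0), 1.0)
    return clamped ** gamma
```

This also shows why the gamma input must be a float with a positive minimum: a gamma of 0 or below makes the power curve degenerate, and integer steps would only allow the identity value 1.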
"precision": 200, 65 | "display": "slider" 66 | }), 67 | }, 68 | } 69 | RETURN_TYPES = ("IMAGE",) 70 | FUNCTION = "color_correct" 71 | CATEGORY = "2🐕/🖼️Image/🪞Filter" 72 | def color_correct( 73 | self, 74 | image: torch.Tensor, 75 | temperature: float, 76 | hue: float, 77 | brightness: float, 78 | contrast: float, 79 | saturation: float, 80 | gamma: float, 81 | blur: float, 82 | ): 83 | batch_size, height, width, _ = image.shape 84 | result = torch.zeros_like(image) 85 | brightness /= 100 86 | contrast /= 100 87 | saturation /= 100 88 | temperature /= 100 89 | brightness = 1 + brightness 90 | contrast = 1 + contrast 91 | saturation = 1 + saturation 92 | for b in range(batch_size): 93 | tensor_image = image[b].numpy() 94 | modified_image = Image.fromarray((tensor_image * 255).astype(np.uint8)) 95 | # brightness 96 | modified_image = ImageEnhance.Brightness(modified_image).enhance(brightness) 97 | # contrast 98 | modified_image = ImageEnhance.Contrast(modified_image).enhance(contrast) 99 | modified_image = np.array(modified_image).astype(np.float32) 100 | # temperature 101 | if temperature > 0: 102 | modified_image[:, :, 0] *= 1 + temperature 103 | modified_image[:, :, 1] *= 1 + temperature * 0.4 104 | elif temperature < 0: 105 | modified_image[:, :, 2] *= 1 - temperature 106 | modified_image = np.clip(modified_image, 0, 255) / 255 107 | # gamma 108 | modified_image = np.clip(np.power(modified_image, gamma), 0, 1) 109 | # saturation 110 | hls_img = cv2.cvtColor(modified_image, cv2.COLOR_RGB2HLS) 111 | hls_img[:, :, 2] = np.clip(saturation * hls_img[:, :, 2], 0, 1) 112 | modified_image = cv2.cvtColor(hls_img, cv2.COLOR_HLS2RGB) * 255 113 | # hue 114 | hsv_img = cv2.cvtColor(modified_image, cv2.COLOR_RGB2HSV) 115 | hsv_img[:, :, 0] = (hsv_img[:, :, 0] + hue) % 360 116 | modified_image = cv2.cvtColor(hsv_img, cv2.COLOR_HSV2RGB) 117 | # blur 118 | if blur > 0: 119 | modified_image = cv2.GaussianBlur(modified_image, (blur*2+1, blur*2+1), 0) 120 | modified_image = 
modified_image.astype(np.uint8) 121 | modified_image = modified_image / 255 122 | modified_image = torch.from_numpy(modified_image).unsqueeze(0) 123 | result[b] = modified_image 124 | return (result,) 125 | 126 | -------------------------------------------------------------------------------- /nodes/egtxzdljjz.py: -------------------------------------------------------------------------------- 1 | from PIL import Image, ImageSequence 2 | import numpy as np 3 | import torch 4 | import os 5 | class EGJZRYTX: 6 | @classmethod 7 | def INPUT_TYPES(s): 8 | return { 9 | "required": { 10 | "file_path": ("STRING", {}), 11 | "fill_color": (["None", "white", "gray", "black"], {}), 12 | "smooth": ("BOOLEAN", {"default": True}) 13 | }, 14 | "optional": { 15 | "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}) 16 | } 17 | } 18 | RETURN_TYPES = ('IMAGE', 'MASK',) 19 | FUNCTION = "get_transparent_image" 20 | CATEGORY = "2🐕/🖼️Image" 21 | 22 | def get_transparent_image(self, file_path, smooth, seed, fill_color): 23 | try: 24 | if os.path.isdir(file_path): 25 | images = [] 26 | for filename in os.listdir(file_path): 27 | if filename.lower().endswith(('.png', '.jpg', '.jpeg', '.webp')): 28 | img_path = os.path.join(file_path, filename) 29 | image = Image.open(img_path).convert('RGBA') 30 | images.append(image) 31 | 32 | if not images: 33 | return None, None 34 | 35 | target_size = images[0].size 36 | 37 | resized_images = [] 38 | for image in images: 39 | if image.size != target_size: 40 | image = image.resize(target_size, Image.BILINEAR) 41 | resized_images.append(image) 42 | 43 | batch_images = np.stack([np.array(img) for img in resized_images], axis=0).astype(np.float32) / 255.0 44 | batch_tensor = torch.from_numpy(batch_images) 45 | 46 | mask_tensor = None 47 | 48 | return batch_tensor, mask_tensor 49 | else: 50 | file_path = file_path.strip('"') 51 | image = Image.open(file_path) 52 | if image is not None: 53 | image_rgba = image.convert('RGBA') 54 | 
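The image loader (egtxzdljjz.py) derives its MASK output from the alpha channel: 8-bit alpha values are normalized to 0..1, and the `smooth` flag inverts the result so transparent pixels become 1.0. That conversion can be shown on plain nested lists. `alpha_to_mask` is a hypothetical illustration, not the node's code (the node does the same thing with NumPy and torch):

```python
def alpha_to_mask(alpha_rows, invert=False):
    """Convert 8-bit alpha values (rows of 0..255 ints) to a float mask
    in 0..1, optionally inverted the way the node does when `smooth`
    is enabled, so fully transparent pixels map to 1.0."""
    mask = [[a / 255.0 for a in row] for row in alpha_rows]
    if invert:
        mask = [[1.0 - v for v in row] for row in mask]
    return mask
```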
image_rgba.save(file_path.rsplit('.', 1)[0] + '.png') 55 | 56 | mask = np.array(image_rgba.getchannel('A')).astype(np.float32) / 255.0 57 | if smooth: 58 | mask = 1.0 - mask 59 | mask_tensor = torch.from_numpy(mask)[None, None, :, :] 60 | 61 | if fill_color == 'white': 62 | image_rgba.putalpha(255) 63 | elif fill_color == 'gray': 64 | for y in range(image_rgba.height): 65 | for x in range(image_rgba.width): 66 | if image_rgba.getpixel((x, y))[3] == 0: 67 | image_rgba.putpixel((x, y), (128, 128, 128)) 68 | elif fill_color == 'black': 69 | for y in range(image_rgba.height): 70 | for x in range(image_rgba.width): 71 | if image_rgba.getpixel((x, y))[3] == 0: 72 | image_rgba.putpixel((x, y), (0, 0, 0)) 73 | elif fill_color == 'None': 74 | pass 75 | else: 76 | raise ValueError("Invalid fill color specified.") 77 | 78 | image_np = np.array(image_rgba).astype(np.float32) / 255.0 79 | image_tensor = torch.from_numpy(image_np)[None, :, :, :] 80 | 81 | return (image_tensor, mask_tensor) 82 | 83 | except Exception as e: 84 | print(f"An error occurred while processing the image for 2🐕 friendly reminders:{e}") 85 | return None, None 86 | 87 | 88 | 89 | -------------------------------------------------------------------------------- /nodes/egwbksh.py: -------------------------------------------------------------------------------- 1 | from io import BytesIO 2 | import os 3 | import sys 4 | import filecmp 5 | import shutil 6 | import __main__ 7 | 8 | python = sys.executable 9 | 10 | 11 | extentions_folder = os.path.join(os.path.dirname(os.path.realpath(__main__.__file__)), 12 | "web" + os.sep + "extensions" + os.sep + "EG_GN_NODES") 13 | javascript_folder = os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))), "js") 14 | 15 | if not os.path.exists(extentions_folder): 16 | print('Making the "web/extensions/EG_GN_NODES" folder') 17 | os.makedirs(extentions_folder) 18 | 19 | result = filecmp.dircmp(javascript_folder, extentions_folder) 20 | 21 | if result.left_only
or result.diff_files: 22 | print('Update to javascripts files detected') 23 | file_list = list(result.left_only) 24 | file_list.extend(x for x in result.diff_files if x not in file_list) 25 | 26 | for file in file_list: 27 | print(f'Copying {file} to extensions folder') 28 | src_file = os.path.join(javascript_folder, file) 29 | dst_file = os.path.join(extentions_folder, file) 30 | if os.path.exists(dst_file): 31 | os.remove(dst_file) 32 | 33 | shutil.copy(src_file, dst_file) 34 | 35 | 36 | class EGWBKSH: 37 | def __init__(self): 38 | pass 39 | 40 | @classmethod 41 | def INPUT_TYPES(s): 42 | 43 | return { 44 | "required": { 45 | "text": ("STRING", {"forceInput": True}), 46 | }, 47 | "hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"}, 48 | } 49 | 50 | RETURN_TYPES = ("STRING",) 51 | RETURN_NAMES = ("text",) 52 | OUTPUT_NODE = True 53 | FUNCTION = "display_text" 54 | 55 | CATEGORY = "2🐕/🗒️Text" 56 | 57 | def display_text(self, text, prompt=None, extra_pnginfo=None): 58 | return {"ui": {"string": [text,]}, "result": (text,)} 59 | -------------------------------------------------------------------------------- /nodes/egwbpj.py: -------------------------------------------------------------------------------- 1 | import os 2 | import requests 3 | import hashlib 4 | import json 5 | import re 6 | class EGWBRYPJ: 7 | def __init__(self): 8 | pass 9 | 10 | @classmethod 11 | def INPUT_TYPES(cls): 12 | return { 13 | "required": {}, 14 | "optional": { 15 | "text1": ("STRING", {"multiline": True}), 16 | "text2": ("STRING", {"multiline": True}), 17 | "text3": ("STRING", {"multiline": True}), 18 | "text4": ("STRING", {"multiline": True}), 19 | "text5": ("STRING", {"multiline": True}), 20 | "Splicing_Characters": ("STRING", {"default": ""}), 21 | "Exclude_Characters": ("STRING", {"default": ""}), 22 | "Exclude_words": ("STRING", {"default": ""}) 23 | }, 24 | } 25 | 26 | RETURN_TYPES = ("STRING",) 27 | RETURN_NAMES = ('concatenated_text',) 28 | FUNCTION = 
"concatenate_text" 29 | CATEGORY = "2🐕/🗒️Text" 30 | 31 | def concatenate_text(self, text1="", text2="", text3="", text4="", text5="", Splicing_Characters="", Exclude_Characters="", Exclude_words="", seed=0): 32 | texts = [text1, text2, text3, text4, text5] 33 | 34 | concatenated_text = Splicing_Characters.join(filter(None, texts)) 35 | 36 | if Exclude_Characters: 37 | exclude_chars = Exclude_Characters.split(',') 38 | exclude_chars = [char.strip() for char in exclude_chars if char.strip()] 39 | for char in exclude_chars: 40 | concatenated_text = concatenated_text.replace(char, "") 41 | 42 | if Exclude_words: 43 | exclude_words = Exclude_words.split(',') 44 | exclude_words = [word.strip() for word in exclude_words if word.strip()] 45 | for word in exclude_words: 46 | pattern = r'(? 0.5).float() 29 | 30 | if shrink_pixels > 0: 31 | 32 | eroded_mask = F.max_pool2d(binary_mask.unsqueeze(0), kernel_size=shrink_pixels+1, stride=1, padding=shrink_pixels//2) 33 | binary_mask = eroded_mask.squeeze(0) 34 | elif expand_pixels > 0: 35 | 36 | 37 | expand_kernel = torch.ones(1, 1, expand_pixels*2+1, expand_pixels*2+1).to(mask.device) / (expand_pixels*2+1)**2 38 | expanded_mask = F.conv2d(binary_mask.unsqueeze(0), expand_kernel, padding=expand_pixels) 39 | binary_mask = expanded_mask.squeeze(0) 40 | 41 | x = torch.linspace(-kernel_size // 2, kernel_size // 2, kernel_size) 42 | x_grid = x.repeat(kernel_size).view(kernel_size, kernel_size) 43 | y_grid = x_grid.t() 44 | gaussian_kernel = torch.exp(-(x_grid**2 + y_grid**2) / (2 * sigma**2)) 45 | gaussian_kernel /= gaussian_kernel.sum() 46 | 47 | kernel = gaussian_kernel.view(1, 1, kernel_size, kernel_size).repeat(1, 1, 1, 1).to(mask.device) 48 | 49 | blurred_mask = F.conv2d(binary_mask.unsqueeze(0), kernel, padding=kernel_size // 2, groups=1).squeeze(0) 50 | return (blurred_mask,) 51 | 52 | -------------------------------------------------------------------------------- /nodes/egzzcjnode.py:
-------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | 7 | 8 | def tensor2pil(image): 9 | return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 10 | 11 | 12 | def pil2tensor(image): 13 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 14 | 15 | 16 | class EGTXZZCJNode: 17 | def __init__(self): 18 | pass 19 | 20 | @classmethod 21 | def INPUT_TYPES(cls): 22 | return { 23 | "required": { 24 | "original_image": ("IMAGE",), 25 | "original_mask": ("MASK",), 26 | }, 27 | "optional": { 28 | "Up": ("INT", {"default": 0, "min": 0}), 29 | "Down": ("INT", {"default": 0, "min": 0}), 30 | "Left": ("INT", {"default": 0, "min": 0}), 31 | "Right": ("INT", {"default": 0, "min": 0}), 32 | }, 33 | } 34 | 35 | RETURN_TYPES = ("IMAGE", "MASK", "COORDS") 36 | RETURN_NAMES = ("cropped_image", "cropped_mask", "Crop_data") 37 | FUNCTION = "mask_crop" 38 | CATEGORY = "2🐕/🔍Refinement processing" 39 | 40 | def mask_crop(self, original_image, original_mask, Up=0, Down=0, Left=0, Right=0): 41 | 42 | image_pil = tensor2pil(original_image) 43 | mask_pil = tensor2pil(original_mask) 44 | 45 | 46 | mask_array = np.array(mask_pil) > 0 47 | 48 | 49 | coords = np.where(mask_array) 50 | if coords[0].size == 0 or coords[1].size == 0: 51 | 52 | return (original_image, original_mask, (0, image_pil.height, 0, image_pil.width)) 53 | 54 | x0, y0, x1, y1 = coords[1].min(), coords[0].min(), coords[1].max(), coords[0].max() 55 | 56 | 57 | x0 -= Left 58 | y0 -= Up 59 | x1 += Right 60 | y1 += Down 61 | 62 | 63 | x0 = max(x0, 0) 64 | y0 = max(y0, 0) 65 | x1 = min(x1, image_pil.width) 66 | y1 = min(y1, image_pil.height) 67 | 68 | 69 | cropped_image_pil = image_pil.crop((x0, y0, x1, y1)) 70 | 71 | 72 | cropped_mask_pil = mask_pil.crop((x0, y0, x1, y1)) 73 | 74 | 75 | cropped_image_tensor =
pil2tensor(cropped_image_pil) 76 | cropped_mask_tensor = pil2tensor(cropped_mask_pil) 77 | 78 | 79 | return (cropped_image_tensor, cropped_mask_tensor, (y0, y1, x0, x1)) 80 | 81 | 82 | 83 | 84 | -------------------------------------------------------------------------------- /nodes/egzzcjpj.py: -------------------------------------------------------------------------------- 1 | import torch 2 | from PIL import Image 3 | import numpy as np 4 | def tensor2pil(image): 5 | return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 6 | def pil2tensor(image): 7 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 8 | def resize_mask(mask_pil, target_size): 9 | return mask_pil.resize(target_size, Image.LANCZOS) 10 | def image2mask(image_pil): 11 | # Convert image to grayscale 12 | image_pil = image_pil.convert("L") 13 | # Convert grayscale image to binary mask 14 | threshold = 128 15 | mask_array = np.array(image_pil) > threshold 16 | return Image.fromarray((mask_array * 255).astype(np.uint8)) 17 | class EGZZHBCJNode: 18 | def __init__(self): 19 | pass 20 | @classmethod 21 | def INPUT_TYPES(cls): 22 | return { 23 | "required": { 24 | "operation": (["merge", "crop", "intersect", "not_intersect"], {}), 25 | }, 26 | "optional": { 27 | "target_image": ("IMAGE", {}), 28 | "target_mask": ("MASK", {}), 29 | "source_image": ("IMAGE", {}), 30 | "source_mask": ("MASK", {}), 31 | }, 32 | } 33 | RETURN_TYPES = ("MASK", "IMAGE") 34 | RETURN_NAMES = ("result_mask", "result_image") 35 | FUNCTION = "mask_operation" 36 | CATEGORY = "2🐕/⛱️Mask" 37 | def mask_operation(self, operation, source_image=None, target_image=None, source_mask=None, target_mask=None): 38 | # Convert source and target images to masks if provided 39 | if source_image is not None: 40 | source_mask_pil = tensor2pil(source_image) 41 | source_mask_pil = image2mask(source_mask_pil) 42 | else: 43 | source_mask_pil = tensor2pil(source_mask) 44 | if 
target_image is not None: 45 | target_mask_pil = tensor2pil(target_image) 46 | target_mask_pil = image2mask(target_mask_pil) 47 | else: 48 | target_mask_pil = tensor2pil(target_mask) 49 | # Resize source mask to target mask size 50 | source_mask_pil = resize_mask(source_mask_pil, target_mask_pil.size) 51 | source_mask_array = np.array(source_mask_pil) > 0 52 | target_mask_array = np.array(target_mask_pil) > 0 53 | if operation == "merge": 54 | result_mask_array = np.logical_or(source_mask_array, target_mask_array) 55 | elif operation == "crop": 56 | result_mask_array = np.logical_and(target_mask_array, np.logical_not(source_mask_array)) 57 | elif operation == "intersect": 58 | result_mask_array = np.logical_and(source_mask_array, target_mask_array) 59 | elif operation == "not_intersect": 60 | result_mask_array = np.logical_xor(source_mask_array, target_mask_array) 61 | else: 62 | raise ValueError("Invalid operation selected") 63 | result_mask = Image.fromarray((result_mask_array * 255).astype(np.uint8)) 64 | result_mask_tensor = pil2tensor(result_mask) 65 | result_image_tensor = pil2tensor(result_mask) 66 | return [result_mask_tensor, result_image_tensor] 67 | NODE_CLASS_MAPPINGS = { "EG_ZZHBCJ" : EGZZHBCJNode } 68 | NODE_DISPLAY_NAME_MAPPINGS = { "EG_ZZHBCJ" : "2🐕Mask can be cut arbitrarily" } 69 | -------------------------------------------------------------------------------- /nodes/egzzhsyh.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | 4 | class EGZZHSYH: 5 | def __init__(self): 6 | pass 7 | 8 | @classmethod 9 | def INPUT_TYPES(cls): 10 | return { 11 | "required": { 12 | "mask": ("MASK", {}), 13 | }, 14 | "optional": { 15 | "kernel_size": ("INT", {"default": 50, "min": 3, "max": 200, "step": 2}), 16 | "sigma": ("FLOAT", {"default": 15.0, "min": 0.1, "max": 200.0, "step": 0.1}), 17 | "shrink_pixels": ("INT", {"default": 0, "min": 0, "max": 50, "step": 1}), 18 | 
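The feathering node (egzzhsyh.py) blurs the mask edge by convolving with a 2D Gaussian kernel normalized to sum to 1, built from `exp(-(x² + y²) / 2σ²)` on a centered grid. The kernel construction can be shown in pure Python; `gaussian_kernel` below is a hypothetical illustration of the same math the node does with `torch.linspace` and `torch.exp`:

```python
import math

def gaussian_kernel(size, sigma):
    """Build a size x size Gaussian kernel normalized so its entries
    sum to 1. size should be odd so the peak lands on the center cell;
    an even size (like the node's default of 50) leaves the peak
    off-center and shifts the blurred mask by half a pixel."""
    half = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               for x in range(-half, half + 1)]
              for y in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    # normalize so convolution preserves the mask's overall coverage
    return [[v / total for v in row] for row in kernel]
```

Normalization is what keeps the feathered mask in the 0..1 range: since the weights sum to 1, the convolution output is a weighted average of the binary mask values.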
"expand_pixels": ("INT", {"default": 0, "min": 0, "max": 50, "step": 1}), 19 | }, 20 | } 21 | 22 | RETURN_TYPES = ("MASK",) 23 | RETURN_NAMES = ("mask",) 24 | FUNCTION = "gaussian_blur_edge" 25 | CATEGORY = "2🐕/⛱️Mask/🪶Fuzzy feathering" 26 | def gaussian_blur_edge(self, mask, kernel_size=51, sigma=15.0, shrink_pixels=0, expand_pixels=0): 27 | 28 | binary_mask = (1 - mask).float()  # invert: growing the background below shrinks the mask 29 | 30 | if shrink_pixels > 0: 31 | 32 | eroded_mask = F.max_pool2d(binary_mask.unsqueeze(0), kernel_size=shrink_pixels*2+1, stride=1, padding=shrink_pixels)  # odd window keeps H and W unchanged for every shrink_pixels value 33 | binary_mask = eroded_mask.squeeze(0) 34 | elif expand_pixels > 0: 35 | 36 | 37 | expand_kernel = torch.ones(1, 1, expand_pixels*2+1, expand_pixels*2+1).to(mask.device) / (expand_pixels*2+1)**2  # box filter: a soft expand rather than a hard dilation 38 | expanded_mask = F.conv2d(binary_mask.unsqueeze(0), expand_kernel, padding=expand_pixels) 39 | binary_mask = expanded_mask.squeeze(0) 40 | 41 | x = torch.linspace(-(kernel_size // 2), kernel_size // 2, kernel_size)  # symmetric grid; -kernel_size // 2 would floor to an off-by-one endpoint 42 | x_grid = x.repeat(kernel_size).view(kernel_size, kernel_size) 43 | y_grid = x_grid.t() 44 | gaussian_kernel = torch.exp(-(x_grid**2 + y_grid**2) / (2 * sigma**2)) 45 | gaussian_kernel /= gaussian_kernel.sum() 46 | 47 | kernel = gaussian_kernel.view(1, 1, kernel_size, kernel_size).to(mask.device) 48 | 49 | blurred_mask = F.conv2d(binary_mask.unsqueeze(0), kernel, padding=kernel_size // 2, groups=1).squeeze(0) 50 | 51 | blurred_mask = 1 - blurred_mask 52 | return (blurred_mask,) 53 | 54 | -------------------------------------------------------------------------------- /nodes/egzzhtkz.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | from scipy.ndimage import binary_dilation, binary_erosion 7 | 8 | 9 | def tensor2pil(image): 10 | return Image.fromarray(np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 11 | 12 | 13 | def pil2tensor(image): 14 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 15 | 16 | 17 | class EGZZKZHTNODE: 18 | def __init__(self): 19 | pass 20 | 21 | @classmethod 22 | def INPUT_TYPES(cls): 23 | return { 24 | "required": { 25 | "mask": ("MASK",), 26 | "extend_size": ("INT", { 27 | "default": 0, 28 | "min": -1000, 29 | "max": 1000, 30 | "step": 1, 31 | "display": "slider" 32 | }), 33 | }, 34 | } 35 | 36 | RETURN_TYPES = ("MASK",) 37 | RETURN_NAMES = ("mask",) 38 | FUNCTION = "mask_expand_shrink" 39 | CATEGORY = "2🐕/⛱️Mask" 40 | 41 | def mask_expand_shrink(self, mask, extend_size):  # extend_size > 0 dilates the mask, < 0 erodes it 42 | mask = tensor2pil(mask) 43 | expand_shrink_value = extend_size 44 | 45 | 46 | mask_array = np.array(mask) > 0 47 | 48 | 49 | if expand_shrink_value > 0: 50 | 51 | expanded_mask_array = binary_dilation(mask_array, iterations=expand_shrink_value) 52 | elif expand_shrink_value < 0: 53 | 54 | expanded_mask_array = binary_erosion(mask_array, iterations=-expand_shrink_value) 55 | else: 56 | 57 | expanded_mask_array = mask_array 58 | 59 | 60 | expanded_mask = Image.fromarray((expanded_mask_array * 255).astype(np.uint8)) 61 | 62 | 63 | expanded_mask_tensor = pil2tensor(expanded_mask) 64 | 65 | return (expanded_mask_tensor, ) 66 | 67 | 68 | 69 | 70 | 71 | -------------------------------------------------------------------------------- /nodes/egzzkzyh.py: -------------------------------------------------------------------------------- 1 | from typing import Tuple, Dict, Any 2 | import torch 3 | from PIL import Image 4 | import numpy as np 5 | from torchvision import transforms 6 | from scipy.ndimage import binary_dilation, binary_erosion 7 | 8 | def tensor2pil(image): 9 | return Image.fromarray(np.clip(255. 
* image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8)) 10 | 11 | def pil2tensor(image): 12 | return torch.from_numpy(np.array(image).astype(np.float32) / 255.0).unsqueeze(0) 13 | 14 | class EGZZSSKZNODE: 15 | def __init__(self): 16 | pass 17 | 18 | @classmethod 19 | def INPUT_TYPES(cls): 20 | return { 21 | "required": { 22 | "mask": ("MASK",), 23 | "extend_size": ("INT", {"default": 0, "min": -1000, "max": 1000, "step": 1}), 24 | }, 25 | } 26 | 27 | RETURN_TYPES = ("MASK",) 28 | RETURN_NAMES = ("mask",) 29 | FUNCTION = "mask_expand_shrink" 30 | CATEGORY = "2🐕/⛱️Mask" 31 | 32 | def mask_expand_shrink(self, mask, extend_size): 33 | mask = tensor2pil(mask) 34 | expand_shrink_value = extend_size 35 | 36 | mask_array = np.array(mask) > 0 37 | 38 | if expand_shrink_value > 0: 39 | expanded_mask_array = binary_dilation(mask_array, iterations=expand_shrink_value) 40 | elif expand_shrink_value < 0: 41 | expanded_mask_array = binary_erosion(mask_array, iterations=-expand_shrink_value) 42 | else: 43 | expanded_mask_array = mask_array 44 | 45 | expanded_mask = Image.fromarray((expanded_mask_array * 255).astype(np.uint8)) 46 | 47 | expanded_mask_tensor = pil2tensor(expanded_mask) 48 | 49 | return (expanded_mask_tensor, ) 50 | 51 | 52 | 53 | 54 | 55 | -------------------------------------------------------------------------------- /nodes/egzzmhnode.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | 4 | class EGZZBYYHNode: 5 | def __init__(self): 6 | pass 7 | 8 | @classmethod 9 | def INPUT_TYPES(cls): 10 | return { 11 | "required": { 12 | "mask": ("MASK", {}), 13 | }, 14 | "optional": { 15 | "Fuzzy_weight": ("INT", {"default": 50, "min": 1, "max": 1000, "step": 2}), 16 | "Blur_size": ("INT", {"default": 50, "min": 1, "max": 1000, "step": 1}), 17 | }, 18 | } 19 | 20 | RETURN_TYPES = ("MASK",) 21 | RETURN_NAMES = ("mask",) 22 | FUNCTION = "gaussian_blur_edge" 23 | CATEGORY = 
"2🐕/⛱️Mask/🪶Fuzzy feathering" 24 | 25 | def gaussian_blur_edge(self, mask, Fuzzy_weight=50, Blur_size=50): 26 | if Fuzzy_weight % 2 == 0: Fuzzy_weight += 1  # force an odd kernel: an even size with this padding shifts the output by one pixel 27 | binary_mask = (mask > 0.5).float() 28 | 29 | sigma_float = Blur_size / 10.0 30 | 31 | kernel_size_half = Fuzzy_weight // 2 32 | x = torch.linspace(-kernel_size_half, kernel_size_half, Fuzzy_weight) 33 | x_grid = x.repeat(Fuzzy_weight).view(Fuzzy_weight, Fuzzy_weight) 34 | y_grid = x_grid.t() 35 | gaussian_kernel = torch.exp(-(x_grid**2 + y_grid**2) / (2 * sigma_float**2)) 36 | gaussian_kernel /= gaussian_kernel.sum() 37 | kernel = gaussian_kernel.view(1, 1, Fuzzy_weight, Fuzzy_weight).to(mask.device) 38 | blurred_mask = F.conv2d(binary_mask.unsqueeze(0), kernel, padding=kernel_size_half, groups=1).squeeze(0) 39 | return (blurred_mask,) 40 | 41 | 42 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests 2 | torch 3 | torchvision 4 | Pillow 5 | numpy 6 | scipy 7 | scikit-image 8 | opencv-python 9 | --------------------------------------------------------------------------------
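The four modes of the EG_ZZHBCJ node in nodes/egzzcjpj.py (merge, crop, intersect, not_intersect) are plain boolean set algebra on thresholded arrays. A standalone sketch of that logic (the `combine_masks` helper name is illustrative, not part of the node):

```python
import numpy as np

def combine_masks(source: np.ndarray, target: np.ndarray, operation: str) -> np.ndarray:
    """Threshold both arrays to booleans, then apply the selected set operation."""
    src = source > 0
    tgt = target > 0
    if operation == "merge":            # union
        return np.logical_or(src, tgt)
    if operation == "crop":             # target minus source
        return np.logical_and(tgt, np.logical_not(src))
    if operation == "intersect":        # intersection
        return np.logical_and(src, tgt)
    if operation == "not_intersect":    # symmetric difference
        return np.logical_xor(src, tgt)
    raise ValueError("Invalid operation selected")

src = np.array([[0, 255], [255, 0]])    # tiny 2x2 masks for illustration
tgt = np.array([[255, 255], [0, 0]])
merged = combine_masks(src, tgt, "merge")
```

Multiplying the resulting boolean array by 255 and casting to `uint8` gives back a PIL-compatible mask, exactly as the node does.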
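Both feathering nodes (nodes/egzzhsyh.py and nodes/egzzmhnode.py) build the same normalized 2D Gaussian kernel and apply it with `F.conv2d`. A minimal sketch of that technique outside ComfyUI (the `feather_mask` name and default values are my assumptions; it expects an odd `kernel_size` so that `padding = kernel_size // 2` preserves the spatial size):

```python
import torch
import torch.nn.functional as F

def feather_mask(mask: torch.Tensor, kernel_size: int = 51, sigma: float = 15.0) -> torch.Tensor:
    """Blur a [1, H, W] mask with a normalized 2D Gaussian kernel.

    kernel_size must be odd so conv2d's padding keeps H and W unchanged."""
    x = torch.linspace(-(kernel_size // 2), kernel_size // 2, kernel_size)
    x_grid = x.repeat(kernel_size).view(kernel_size, kernel_size)
    y_grid = x_grid.t()
    kernel = torch.exp(-(x_grid**2 + y_grid**2) / (2 * sigma**2))
    kernel /= kernel.sum()                      # normalize so output stays in [0, 1]
    kernel = kernel.view(1, 1, kernel_size, kernel_size).to(mask.device)
    return F.conv2d(mask.unsqueeze(0), kernel, padding=kernel_size // 2).squeeze(0)

mask = torch.zeros(1, 64, 64)
mask[:, 16:48, 16:48] = 1.0                     # hard-edged square
soft = feather_mask(mask)                       # same shape, soft edges
```

Because the kernel sums to one, a solid interior stays near 1.0 and only the edge band falls off smoothly, which is what makes this usable as inpainting feathering.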
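The expand/shrink nodes in nodes/egzzhtkz.py and nodes/egzzkzyh.py map one signed slider onto scipy's `binary_dilation`/`binary_erosion`, one iteration per pixel of growth or shrinkage. A condensed sketch (`expand_shrink` is an illustrative name for the shared logic):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def expand_shrink(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Grow (pixels > 0) or shrink (pixels < 0) a boolean mask by |pixels|
    iterations of scipy's default 4-connected structuring element."""
    mask = mask > 0
    if pixels > 0:
        return binary_dilation(mask, iterations=pixels)
    if pixels < 0:
        return binary_erosion(mask, iterations=-pixels)
    return mask

square = np.zeros((9, 9), dtype=bool)
square[3:6, 3:6] = True                 # 3x3 block
grown = expand_shrink(square, 1)        # block plus its 4-connected border
shrunk = expand_shrink(square, -1)      # only the center pixel survives
```

One dilation iteration with the default cross-shaped element adds the 12 edge-adjacent cells around a 3x3 block (corners excluded), and one erosion leaves only the pixel whose entire 4-neighborhood was set.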