├── LICENSE
├── README.md
├── __init__.py
├── exllama.py
├── requirements.txt
├── text.js
└── text.py
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Zuellni
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ComfyUI ExLlama Nodes
2 | A simple local text generator for [ComfyUI](https://github.com/comfyanonymous/ComfyUI) using [ExLlamaV2](https://github.com/turboderp/exllamav2).
3 |
4 | ## Installation
5 | Clone the repository to `custom_nodes` and install the requirements:
6 | ```
7 | cd custom_nodes
8 | git clone https://github.com/Zuellni/ComfyUI-ExLlama-Nodes
9 | pip install -r ComfyUI-ExLlama-Nodes/requirements.txt
10 | ```
11 |
12 | On Windows, install the prebuilt wheels for [ExLlamaV2](https://github.com/turboderp/exllamav2/releases/latest) and [FlashAttention](https://github.com/bdashore3/flash-attention/releases/latest):
13 | ```
14 | pip install exllamav2-X.X.X+cuXXX.torch2.X.X-cp3XX-cp3XX-win_amd64.whl
15 | pip install flash_attn-X.X.X+cuXXX.torch2.X.X-cp3XX-cp3XX-win_amd64.whl
16 | ```
17 |
18 | ## Usage
19 | Only EXL2, 4-bit GPTQ, and FP16 models are supported. You can find them on [Hugging Face](https://huggingface.co).
20 | To use a model with the nodes, clone its repository with `git` or manually download all of its files and place them in a folder under `models/llm`.
21 | For example, if you'd like to download the 4-bit [Llama-3.1-8B-Instruct](https://huggingface.co/turboderp/Llama-3.1-8B-Instruct-exl2):
22 | ```
23 | cd models
24 | mkdir llm
25 | git lfs install
26 | git clone https://huggingface.co/turboderp/Llama-3.1-8B-Instruct-exl2 -b 4.0bpw llm/Llama-3.1-8B-Instruct-exl2
27 | ```
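28 |
29 | The Loader discovers any folder under `llm` that contains a `config.json`, so after cloning, the layout should look roughly like the sketch below (based on the example above; exact file names vary by model):
30 | ```
31 | models/llm/Llama-3.1-8B-Instruct-exl2/
32 | ├── config.json
33 | ├── tokenizer.json
34 | └── *.safetensors
35 | ```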
36 |
37 | > [!TIP]
38 | > You can add your own `llm` path to the [extra_model_paths.yaml](https://github.com/comfyanonymous/ComfyUI/blob/master/extra_model_paths.yaml.example) file and put the models there instead.
39 |
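40 | For reference, a minimal entry in that file might look like the sketch below; the section name and path are placeholders, and only the `llm` key matters to these nodes:
41 | ```
42 | my_models:
43 |     llm: /path/to/your/llm/
44 | ```
45 |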
46 | ## Nodes
47 | ### ExLlama Nodes
48 | | Node | Parameter | Description |
49 | |:-|:-|:-|
50 | | Loader | | Loads models from the `llm` directory. |
51 | | | `cache_bits` | A lower value reduces VRAM usage, but also affects generation speed and quality. |
52 | | | `flash_attention` | Enabling reduces VRAM usage. Not supported on cards with compute capability lower than `8.0`. |
53 | | | `max_seq_len` | Max context; a higher value uses more VRAM. `0` defaults to the model config. |
54 | | Formatter | | Formats messages using the model's chat template. |
55 | | | `add_assistant_role` | Appends the assistant role to the formatted output. |
56 | | Tokenizer | | Tokenizes input text using the model's tokenizer. |
57 | | | `add_bos_token` | Prepends the input with a `bos` token if enabled. |
58 | | | `encode_special_tokens` | Encodes special tokens such as `bos` and `eos` if enabled, otherwise treats them as normal strings. |
59 | | Settings | | Optional sampler settings node. Refer to SillyTavern for parameter descriptions. |
60 | | Generator | | Generates text based on the given input. |
61 | | | `unload` | Unloads the model after each generation to reduce VRAM usage. |
62 | | | `stop_conditions` | A list of strings to stop generation on, e.g. `"\n"` to stop on newline. Leave empty to only stop on `eos`. |
63 | | | `max_tokens` | Max new tokens to generate. `0` uses all available context. |
64 |
65 | ### Text Nodes
66 | | Node | Description |
67 | |:-|:-|
68 | | Clean | Strips punctuation, fixes whitespace, and changes case for input text. |
69 | | Message | A message for the Formatter node. Can be chained to create a conversation. |
70 | | Preview | Displays generated text in the UI. |
71 | | Replace | Replaces variable names in curly brackets, e.g. `{a}`, with their values. |
72 | | String | A string constant. |
73 |
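74 | As a usage note, the Generator parses `stop_conditions` as a comma-separated list of JSON strings, so a value like the one below (the strings are only examples) stops on either a newline or a custom turn marker:
75 | ```
76 | "\n", "User:"
77 | ```
78 |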
79 | ## Workflow
80 | An example workflow is embedded in the image below and can be opened in ComfyUI.
81 |
82 | 
83 |
84 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | from . import exllama, text
2 |
3 | NODE_CLASS_MAPPINGS = {}
4 | NODE_DISPLAY_NAME_MAPPINGS = {}
5 | WEB_DIRECTORY = "."
6 |
7 | for module in (exllama, text):
8 |     NODE_CLASS_MAPPINGS.update(module.NODE_CLASS_MAPPINGS)
9 |     NODE_DISPLAY_NAME_MAPPINGS.update(module.NODE_DISPLAY_NAME_MAPPINGS)
10 |
--------------------------------------------------------------------------------
/exllama.py:
--------------------------------------------------------------------------------
1 | import gc
2 | import json
3 | import random
4 | from pathlib import Path
5 | from time import time
6 |
7 | from exllamav2 import (
8 |     ExLlamaV2,
9 |     ExLlamaV2Cache,
10 |     ExLlamaV2Cache_Q4,
11 |     ExLlamaV2Cache_Q6,
12 |     ExLlamaV2Cache_Q8,
13 |     ExLlamaV2Config,
14 |     ExLlamaV2Tokenizer,
15 | )
16 | from exllamav2.generator import (
17 |     ExLlamaV2DynamicGenerator,
18 |     ExLlamaV2DynamicJob,
19 |     ExLlamaV2Sampler,
20 | )
21 | from jinja2 import Template
22 |
23 | from comfy.model_management import soft_empty_cache, unload_all_models
24 | from comfy.utils import ProgressBar
25 | from folder_paths import add_model_folder_path, get_folder_paths, models_dir
26 |
27 | _CATEGORY = "zuellni/exllama"
28 | _MAPPING = "ZuellniExLlama"
29 |
30 |
31 | class Loader:
32 |     @classmethod
33 |     def INPUT_TYPES(cls):
34 |         if not cls._MODELS:
35 |             add_model_folder_path("llm", str(Path(models_dir) / "llm"))
36 |
37 |         for folder in get_folder_paths("llm"):
38 |             for path in Path(folder).rglob("*/"):
39 |                 if (path / "config.json").is_file():
40 |                     parent = path.relative_to(folder).parent
41 |                     cls._MODELS[str(parent / path.name)] = path
42 |
43 |         models = list(cls._MODELS.keys())
44 |         caches = list(cls._CACHES.keys())
45 |         default = models[0] if models else None
46 |
47 |         return {
48 |             "required": {
49 |                 "model": (models, {"default": default}),
50 |                 "cache_bits": (caches, {"default": 4}),
51 |                 "flash_attention": ("BOOLEAN", {"default": True}),
52 |                 "max_seq_len": (
53 |                     "INT",
54 |                     {"default": 2048, "min": 0, "max": 2**20, "step": 256},
55 |                 ),
56 |             }
57 |         }
58 |
59 |     _CACHES = {
60 |         4: lambda m: ExLlamaV2Cache_Q4(m, lazy=True),
61 |         6: lambda m: ExLlamaV2Cache_Q6(m, lazy=True),
62 |         8: lambda m: ExLlamaV2Cache_Q8(m, lazy=True),
63 |         16: lambda m: ExLlamaV2Cache(m, lazy=True),
64 |     }
65 |     _MODELS = {}
66 |     CATEGORY = _CATEGORY
67 |     FUNCTION = "setup"
68 |     RETURN_NAMES = ("MODEL",)
69 |     RETURN_TYPES = ("EXL_MODEL",)
70 |
71 |     def setup(self, model, cache_bits, flash_attention, max_seq_len):
72 |         self.unload()
73 |         self.cache_bits = cache_bits
74 |
75 |         self.config = ExLlamaV2Config(__class__._MODELS[model])
76 |         self.config.no_flash_attn = not flash_attention
77 |
78 |         if max_seq_len:
79 |             self.config.max_seq_len = max_seq_len
80 |
81 |             if self.config.max_input_len > max_seq_len:
82 |                 self.config.max_input_len = max_seq_len
83 |                 self.config.max_attention_size = max_seq_len**2
84 |
85 |         self.tokenizer = ExLlamaV2Tokenizer(self.config)
86 |         return (self,)
87 |
88 |     def load(self):
89 |         if (
90 |             hasattr(self, "model")
91 |             and hasattr(self, "cache")
92 |             and hasattr(self, "generator")
93 |             and self.model
94 |             and self.cache
95 |             and self.generator
96 |         ):
97 |             return
98 |
99 |         self.model = ExLlamaV2(self.config)
100 |         self.cache = __class__._CACHES[self.cache_bits](self.model)
101 |
102 |         progress = ProgressBar(len(self.model.modules))
103 |         self.model.load_autosplit(self.cache, callback=lambda _, __: progress.update(1))
104 |
105 |         self.generator = ExLlamaV2DynamicGenerator(
106 |             model=self.model,
107 |             cache=self.cache,
108 |             tokenizer=self.tokenizer,
109 |             paged=not self.config.no_flash_attn,
110 |         )
111 |
112 |     def unload(self):
113 |         if hasattr(self, "model") and self.model:
114 |             self.model.unload()
115 |
116 |         self.model = None
117 |         self.cache = None
118 |         self.generator = None
119 |
120 |         gc.collect()
121 |         soft_empty_cache()
122 |
123 |
124 | class Formatter:
125 |     @classmethod
126 |     def INPUT_TYPES(cls):
127 |         return {
128 |             "required": {
129 |                 "model": ("EXL_MODEL",),
130 |                 "messages": ("EXL_MESSAGES",),
131 |                 "add_assistant_role": ("BOOLEAN", {"default": True}),
132 |             }
133 |         }
134 |
135 |     CATEGORY = _CATEGORY
136 |     FUNCTION = "format"
137 |     RETURN_NAMES = ("TEXT",)
138 |     RETURN_TYPES = ("STRING",)
139 |
140 |     def raise_exception(self, message):
141 |         raise Exception(message)
142 |
143 |     def render(self, template, messages, add_assistant_role):
144 |         return (
145 |             template.render(
146 |                 add_generation_prompt=add_assistant_role,
147 |                 raise_exception=self.raise_exception,
148 |                 messages=messages,
149 |                 bos_token="",
150 |             ),
151 |         )
152 |
153 |     def format(self, model, messages, add_assistant_role):
154 |         template = model.tokenizer.tokenizer_config_dict["chat_template"]
155 |         template = Template(template)
156 |
157 |         try:
158 |             return self.render(template, messages, add_assistant_role)
159 |         except Exception:
160 |             system = None
161 |             merged = []
162 |
163 |             for message in messages:
164 |                 if message["role"] == "system":
165 |                     system = {"role": "user", "content": message["content"]}
166 |                     merged.append(system)
167 |                 elif system and message["role"] == "user":
168 |                     index = merged.index(system)
169 |                     merged[index]["content"] += "\n" + message["content"]
170 |                     system = None
171 |                 else:
172 |                     merged.append(message)
173 |                     system = None
174 |
175 |             return self.render(template, merged, add_assistant_role)
176 |
177 |
178 | class Tokenizer:
179 |     @classmethod
180 |     def INPUT_TYPES(cls):
181 |         return {
182 |             "required": {
183 |                 "model": ("EXL_MODEL",),
184 |                 "text": ("STRING", {"default": "", "forceInput": True}),
185 |                 "add_bos_token": ("BOOLEAN", {"default": True}),
186 |                 "encode_special_tokens": ("BOOLEAN", {"default": True}),
187 |             }
188 |         }
189 |
190 |     CATEGORY = _CATEGORY
191 |     FUNCTION = "tokenize"
192 |     RETURN_NAMES = ("TOKENS",)
193 |     RETURN_TYPES = ("EXL_TOKENS",)
194 |
195 |     def tokenize(self, model, text, add_bos_token, encode_special_tokens):
196 |         return (
197 |             model.tokenizer.encode(
198 |                 text=text,
199 |                 add_bos=add_bos_token,
200 |                 encode_special_tokens=encode_special_tokens,
201 |             ),
202 |         )
203 |
204 |
205 | class Settings:
206 |     @classmethod
207 |     def INPUT_TYPES(cls):
208 |         return {
209 |             "required": {
210 |                 "temperature": (
211 |                     "FLOAT",
212 |                     {"default": 1.0, "min": 0.0, "max": 10.0, "step": 0.01},
213 |                 ),
214 |                 "penalty": (
215 |                     "FLOAT",
216 |                     {"default": 1.0, "min": 1.0, "max": 10.0, "step": 0.01},
217 |                 ),
218 |                 "top_k": ("INT", {"default": 1, "min": 0, "max": 1000}),
219 |                 "top_p": (
220 |                     "FLOAT",
221 |                     {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01},
222 |                 ),
223 |                 "top_a": (
224 |                     "FLOAT",
225 |                     {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01},
226 |                 ),
227 |                 "min_p": (
228 |                     "FLOAT",
229 |                     {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01},
230 |                 ),
231 |                 "tfs": (
232 |                     "FLOAT",
233 |                     {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01},
234 |                 ),
235 |                 "typical": (
236 |                     "FLOAT",
237 |                     {"default": 0.0, "min": 0.0, "max": 1.0, "step": 0.01},
238 |                 ),
239 |                 "temperature_last": ("BOOLEAN", {"default": True}),
240 |             }
241 |         }
242 |
243 |     CATEGORY = _CATEGORY
244 |     FUNCTION = "set"
245 |     RETURN_NAMES = ("SETTINGS",)
246 |     RETURN_TYPES = ("EXL_SETTINGS",)
247 |
248 |     def set(
249 |         self,
250 |         temperature,
251 |         penalty,
252 |         top_k,
253 |         top_p,
254 |         top_a,
255 |         min_p,
256 |         tfs,
257 |         typical,
258 |         temperature_last,
259 |     ):
260 |         settings = ExLlamaV2Sampler.Settings()
261 |         settings.temperature = temperature
262 |         settings.token_repetition_penalty = penalty
263 |         settings.top_k = top_k
264 |         settings.top_p = top_p
265 |         settings.top_a = top_a
266 |         settings.min_p = min_p
267 |         settings.tfs = tfs
268 |         settings.typical = typical
269 |         settings.temperature_last = temperature_last
270 |         return (settings,)
271 |
272 |
273 | class Generator:
274 |     @classmethod
275 |     def INPUT_TYPES(cls):
276 |         return {
277 |             "required": {
278 |                 "model": ("EXL_MODEL",),
279 |                 "tokens": ("EXL_TOKENS",),
280 |                 "unload": ("BOOLEAN", {"default": False}),
281 |                 "stop_conditions": ("STRING", {"default": r'"\n"'}),
282 |                 "max_tokens": ("INT", {"default": 128, "min": 0, "max": 2**20}),
283 |                 "seed": ("INT", {"default": 0, "min": 0, "max": 2**64 - 1}),
284 |             },
285 |             "optional": {"settings": ("EXL_SETTINGS",)},
286 |         }
287 |
288 |     CATEGORY = _CATEGORY
289 |     FUNCTION = "generate"
290 |     RETURN_NAMES = ("TEXT",)
291 |     RETURN_TYPES = ("STRING",)
292 |
293 |     def generate(
294 |         self,
295 |         model,
296 |         tokens,
297 |         unload,
298 |         stop_conditions,
299 |         max_tokens,
300 |         seed,
301 |         settings=None,
302 |     ):
303 |         if unload:
304 |             unload_all_models()
305 |             model.unload()
306 |
307 |         model.load()
308 |         random.seed(seed)
309 |         tokens_len = tokens.shape[-1]
310 |         max_len = model.config.max_seq_len - tokens_len
311 |         stop = [model.tokenizer.eos_token_id]
312 |
313 |         if not max_tokens or max_tokens > max_len:
314 |             max_tokens = max_len
315 |
316 |         if stop_conditions.strip():
317 |             stop_conditions = json.loads(f"[{stop_conditions}]")
318 |             stop.extend(stop_conditions)
319 |
320 |         if not settings:
321 |             settings = ExLlamaV2Sampler.Settings()
322 |             settings = settings.greedy()
323 |
324 |         job = ExLlamaV2DynamicJob(
325 |             input_ids=tokens,
326 |             max_new_tokens=max_tokens,
327 |             stop_conditions=stop,
328 |             gen_settings=settings,
329 |         )
330 |
331 |         progress = ProgressBar(max_tokens)
332 |         model.generator.enqueue(job)
333 |         start = time()
334 |         eos = False
335 |         chunks = []
336 |         count = 0
337 |
338 |         while not eos:
339 |             for response in model.generator.iterate():
340 |                 if response["stage"] == "streaming":
341 |                     chunk = response.get("text", "")
342 |                     eos = response["eos"]
343 |                     chunks.append(chunk)
344 |                     progress.update(1)
345 |                     count += 1
346 |
347 |         output = "".join(chunks)
348 |         total = round(time() - start, 2)
349 |         speed = round(count / total, 2)
350 |
351 |         print(
352 |             f"Output generated in {total} seconds",
353 |             f"({tokens_len} context, {count} tokens, {speed}t/s)",
354 |         )
355 |
356 |         if unload:
357 |             model.unload()
358 |
359 |         return (output,)
360 |
361 |
362 | NODE_CLASS_MAPPINGS = {
363 | f"{_MAPPING}Loader": Loader,
364 | f"{_MAPPING}Formatter": Formatter,
365 | f"{_MAPPING}Tokenizer": Tokenizer,
366 | f"{_MAPPING}Settings": Settings,
367 | f"{_MAPPING}Generator": Generator,
368 | }
369 |
370 | NODE_DISPLAY_NAME_MAPPINGS = {
371 | f"{_MAPPING}Loader": "Loader",
372 | f"{_MAPPING}Formatter": "Formatter",
373 | f"{_MAPPING}Tokenizer": "Tokenizer",
374 | f"{_MAPPING}Settings": "Settings",
375 | f"{_MAPPING}Generator": "Generator",
376 | }
377 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | exllamav2>=0.1.5; platform_system == "Linux"
2 | flash-attn>=2.5.7; platform_system == "Linux"
3 |
--------------------------------------------------------------------------------
/text.js:
--------------------------------------------------------------------------------
1 | import { app } from "../../../scripts/app.js"
2 |
3 | app.registerExtension({
4 |     name: "ZuellniText",
5 |     async beforeRegisterNodeDef(nodeType, nodeData, app) {
6 |         if (nodeData.category != "zuellni/text")
7 |             return
8 |
9 |         const onNodeCreated = nodeType.prototype.onNodeCreated
10 |         const onExecuted = nodeType.prototype.onExecuted
11 |
12 |         if (nodeData.name == "ZuellniTextPreview") {
13 |             nodeType.prototype.onNodeCreated = function () {
14 |                 const output = this.widgets.find(w => w.name == "output")
15 |
16 |                 if (output) {
17 |                     output.inputEl.placeholder = ""
18 |                     output.inputEl.readOnly = true
19 |                     output.inputEl.style.cursor = "default"
20 |                     output.inputEl.style.opacity = 0.7
21 |                 }
22 |
23 |                 this.setSize(this.computeSize())
24 |                 return onNodeCreated?.apply(this, arguments)
25 |             }
26 |
27 |             nodeType.prototype.onExecuted = function (message) {
28 |                 const output = this.widgets.find(w => w.name == "output")
29 |                 output && (output.value = message.text)
30 |                 return onExecuted?.apply(this, arguments)
31 |             }
32 |         } else if (nodeData.name == "ZuellniTextReplace") {
33 |             nodeType.prototype.onNodeCreated = function () {
34 |                 const count = this.widgets.find(w => w.name == "count")
35 |
36 |                 if (count) {
37 |                     count.callback = () => this.onChanged(count.value)
38 |                     this.onChanged(count.value)
39 |                 }
40 |
41 |                 return onNodeCreated?.apply(this, arguments)
42 |             }
43 |
44 |             nodeType.prototype.onChanged = function (count) {
45 |                 !this.inputs && (this.inputs = [])
46 |                 const current = this.inputs.length
47 |
48 |                 if (current == count)
49 |                     return
50 |
51 |                 if (current < count)
52 |                     for (let i = current; i < count; i++)
53 |                         this.addInput(String.fromCharCode(i + 97), "STRING")
54 |                 else
55 |                     for (let i = current - 1; i >= count; i--)
56 |                         this.removeInput(i)
57 |             }
58 |         }
59 |     }
60 | })
61 |
--------------------------------------------------------------------------------
/text.py:
--------------------------------------------------------------------------------
1 | import string
2 |
3 | _CATEGORY = "zuellni/text"
4 | _MAPPING = "ZuellniText"
5 |
6 |
7 | class Clean:
8 |     @classmethod
9 |     def INPUT_TYPES(cls):
10 |         return {
11 |             "required": {
12 |                 "text": ("STRING", {"default": "", "forceInput": True}),
13 |                 "strip": (
14 |                     ("both", "punctuation", "whitespace", "none"),
15 |                     {"default": "both"},
16 |                 ),
17 |                 "case": (
18 |                     ("lower", "upper", "capitalize", "title", "none"),
19 |                     {"default": "lower"},
20 |                 ),
21 |                 "fix": ("BOOLEAN", {"default": True}),
22 |             }
23 |         }
24 |
25 |     CATEGORY = _CATEGORY
26 |     FUNCTION = "clean"
27 |     RETURN_NAMES = ("TEXT",)
28 |     RETURN_TYPES = ("STRING",)
29 |
30 |     def clean(self, text, strip, case, fix):
31 |         if strip == "both":
32 |             text = text.strip(string.punctuation + string.whitespace)
33 |         elif strip != "none":
34 |             text = text.strip(getattr(string, strip))
35 |
36 |         if case == "title":
37 |             text = string.capwords(text)
38 |         elif case != "none":
39 |             text = getattr(text, case)()
40 |
41 |         if fix:
42 |             text = "\n".join([t for t in text.splitlines() if t])
43 |             text = " ".join(text.split())
44 |
45 |         return (text,)
46 |
47 |
48 | class Message:
49 |     @classmethod
50 |     def INPUT_TYPES(cls):
51 |         return {
52 |             "required": {
53 |                 "role": (("system", "user", "assistant"), {"default": "system"}),
54 |                 "content": ("STRING", {"default": "", "multiline": True}),
55 |             },
56 |             "optional": {"messages": ("EXL_MESSAGES",)},
57 |         }
58 |
59 |     CATEGORY = _CATEGORY
60 |     FUNCTION = "add"
61 |     RETURN_NAMES = ("MESSAGES",)
62 |     RETURN_TYPES = ("EXL_MESSAGES",)
63 |
64 |     def add(self, role, content, messages=None):
65 |         return ((messages or []) + [{"role": role, "content": content}],)
66 |
67 |
68 | class Preview:
69 |     @classmethod
70 |     def INPUT_TYPES(cls):
71 |         return {
72 |             "required": {
73 |                 "text": ("STRING", {"default": "", "forceInput": True}),
74 |                 "print_to_console": ("BOOLEAN", {"default": False}),
75 |                 "output": ("STRING", {"default": "", "multiline": True}),
76 |             }
77 |         }
78 |
79 |     CATEGORY = _CATEGORY
80 |     FUNCTION = "preview"
81 |     OUTPUT_NODE = True
82 |     RETURN_TYPES = ()
83 |
84 |     def preview(self, text, print_to_console, output):
85 |         print_to_console and print(text)
86 |         return {"ui": {"text": [text]}}
87 |
88 |
89 | class Replace:
90 |     @classmethod
91 |     def INPUT_TYPES(cls):
92 |         return {
93 |             "required": {
94 |                 "count": ("INT", {"default": 1, "min": 1, "max": 26}),
95 |                 "text": ("STRING", {"default": "", "multiline": True}),
96 |             }
97 |         }
98 |
99 |     CATEGORY = _CATEGORY
100 |     FUNCTION = "replace"
101 |     RETURN_NAMES = ("TEXT",)
102 |     RETURN_TYPES = ("STRING",)
103 |
104 |     def replace(self, count, text="", **kwargs):
105 |         for index in range(count):
106 |             key = chr(index + 97)
107 |
108 |             if key in kwargs and kwargs[key]:
109 |                 text = text.replace(f"{{{key}}}", kwargs[key])
110 |
111 |         return (text,)
112 |
113 |
114 | class String:
115 |     @classmethod
116 |     def INPUT_TYPES(cls):
117 |         return {"required": {"text": ("STRING", {"default": "", "multiline": True})}}
118 |
119 |     CATEGORY = _CATEGORY
120 |     FUNCTION = "get"
121 |     RETURN_NAMES = ("TEXT",)
122 |     RETURN_TYPES = ("STRING",)
123 |
124 |     def get(self, text):
125 |         return (text,)
126 |
127 |
128 | NODE_CLASS_MAPPINGS = {
129 | f"{_MAPPING}Clean": Clean,
130 | f"{_MAPPING}Message": Message,
131 | f"{_MAPPING}Preview": Preview,
132 | f"{_MAPPING}Replace": Replace,
133 | f"{_MAPPING}String": String,
134 | }
135 |
136 | NODE_DISPLAY_NAME_MAPPINGS = {
137 | f"{_MAPPING}Clean": "Clean",
138 | f"{_MAPPING}Message": "Message",
139 | f"{_MAPPING}Preview": "Preview",
140 | f"{_MAPPING}Replace": "Replace",
141 | f"{_MAPPING}String": "String",
142 | }
143 |
144 | WEB_DIRECTORY = "."
145 |
--------------------------------------------------------------------------------