├── .github
│   └── workflows
│       └── publish.yml
├── README.md
├── __init__.py
├── examples
│   ├── DownLoad New Model Workflow.png
│   ├── Equal My Ollama Generate WorkFlow.png
│   ├── My Ollama Delete Model WorkFlow.png
│   ├── My Ollama Generate Advance WorkFlow-Context.png
│   ├── My Ollama Generate WorkFlow.png
│   ├── My Ollama Special Generate Advance WorkFlow.png
│   ├── My Ollama Vision WorkFlow.png
│   ├── Ollama Generate Advance WorkFlow.png
│   └── download.png
├── file
│   ├── category.csv
│   └── saved_contexts
│       └── save_context
├── ollama_main.py
├── pyproject.toml
├── requirements.txt
└── web
    └── js
        └── ollamaOperation.js
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: Publish to Comfy registry
2 | on:
3 | workflow_dispatch:
4 | push:
5 | branches:
6 | - main
7 | - master
8 | paths:
9 | - "pyproject.toml"
10 |
11 | jobs:
12 | publish-node:
13 | name: Publish Custom Node to registry
14 | runs-on: ubuntu-latest
15 | steps:
16 | - name: Check out code
17 | uses: actions/checkout@v4
18 | - name: Publish Custom Node
19 | uses: Comfy-Org/publish-node-action@main
20 | with:
21 | ## Add your own personal access token to your Github Repository secrets and reference it here.
22 | personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}
23 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Language
2 |
3 | - [English](#english)
4 | - [中文](#中文)
5 |
6 | ---
7 |
8 | ### English
9 |
10 | ## Update
11 |
12 | 9/17/2024 Major update: I changed how models are downloaded and how the model list is displayed, and added the ability to delete downloaded models and to save and load contexts. Please read the whole README.
13 |
14 | 5/15/2024 Added keep_alive support.
15 |
16 | 7/15/2024 Added an extra_model field where users can enter additional model names.
17 |
18 | For keep_alive, 0 releases the model from video memory immediately after generation, while 60m keeps the model in video memory for 60 minutes after it is loaded.
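For reference, this is roughly how the nodes pass keep_alive to the Ollama Python client (a minimal sketch based on ollama_main.py; the model name and prompt are placeholders):

```python
from ollama import Client

client = Client(host="http://127.0.0.1:11434")
response = client.generate(
    model="llama3:latest",   # placeholder; any downloaded text model works
    prompt="What is Art?",
    keep_alive="0",          # "0" = unload immediately, "60m" = stay loaded for 60 minutes
    options={"seed": 0},
)
print(response["response"])
```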
19 |
20 | # ComfyUi-Ollama-YN
21 | This is an integration project. I adapted the prompt-word template setup from the ComfyUI-Prompt-MZ project and built on the comfyui-ollama project so that the generated prompts better match the requirements of Stable Diffusion.
22 |
23 | The projects I referenced are:
24 |
25 | https://github.com/stavsap/comfyui-ollama
26 |
27 | https://github.com/MinusZoneAI/ComfyUI-Prompt-MZ
28 |
29 | I would like to express my special thanks to the authors of these two projects
30 |
31 | # INSTALLATION
32 |
33 | 1、Install ComfyUI
34 |
35 | 2、Run git clone in the custom_nodes folder inside your ComfyUI installation, or download the zip and unzip its contents to custom_nodes/ComfyUi-Ollama-YN.
36 | The git command is: git clone https://github.com/wujm424606/ComfyUi-Ollama-YN.git
37 |
38 | 3、Start/restart ComfyUI
39 |
40 | # This section describes how to install ollama
41 |
42 | https://ollama.com/
43 |
44 | You can download and install ollama from this website
45 |
46 | # This section describes how to install models
47 |
48 | 1、You can choose a model, for example from https://ollama.com/brxce/stable-diffusion-prompt-generator
49 |
50 | 2、Enter the model name in the extra_model field and run the workflow.
51 |
52 | 
53 |
54 |
55 | Detailed information is shown in the Tips section.
56 |
57 |
58 |
59 | # Let me introduce you to the functionality of this project.
60 |
61 | First function:
62 |
63 | 
64 |
65 |
66 | This function infers prompt words from an image.
67 |
68 | 
69 |
70 |
71 | I provide 4 default vision models. If you have a better model, you can add its name in the extra_model field. Click Run and the model will be downloaded automatically.
72 | When the download is complete, search for and load the node again, and the newly downloaded model will appear in the model list.
73 |
74 | Second function:
75 |
76 | 
77 |
78 |
79 | This function is a simple question-and-answer function.
80 |
81 | 
82 |
83 |
84 |
85 | I provide 6 default text models. If you have a better model, you can add its name in the extra_model field. Click Run and the model will be downloaded automatically.
86 | When the download is complete, search for and load the node again, and the newly downloaded model will appear in the model list.
87 |
88 |
89 | Third function:
90 |
91 | 
92 |
93 |
94 | This function lets the model embellish (polish) your prompt words.
95 |
96 | Fourth function:
97 |
98 | 
99 |
100 |
101 | This function answers questions in multiple turns, using the context of the previous exchanges.
102 |
103 | Fifth function:
104 |
105 |
106 | 
107 |
108 |
109 | This function answers questions by loading a previously saved context; it is used together with the save context node.
110 |
111 |
112 | Sixth function:
113 |
114 |
115 | 
116 |
117 |
118 | This function generates prompt words that more closely follow the Stable Diffusion format. It relies on a built-in template, so it will not work with every model; choose the model carefully.
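Under the hood, the node appends a JSON template to your prompt and parses the model's reply back into these fields (copied from prompt_template in ollama_main.py), which are also exposed as separate outputs of the node:

```python
# Fields the Special Generate Advance node asks the model to fill in
prompt_template = {
    "description": "",
    "long_prompt": "",
    "camera_angle_word": "",
    "style_words": "",
    "subject_words": "",
    "light_words": "",
    "environment_words": ""
}
```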
119 |
120 | Seventh function:
121 |
122 |
123 | 
124 |
125 |
126 | This function deletes a model that has been downloaded.
127 |
128 |
129 | # Tips:
130 |
131 | 1、
132 |
133 | Before:
134 |
135 | Before the update, models were downloaded like this:
136 |
137 | You had to run the install command in CMD:
138 |
139 | ollama run model_name
140 |
141 | Now:
142 |
143 | Now we can download a model by entering its name in the extra_model field.
144 |
145 | 
146 |
147 |
148 | The download progress will be shown in the CMD window:
149 |
150 | The picture below shows a successfully completed download:
151 |
152 |
153 |
154 |
155 |
156 |
157 | 2、 If you previously downloaded models and now want them to appear in the load list:
158 |
159 | 1、 First, run "ollama list" in CMD.
160 |
161 |
162 |
163 | 2、 Then find the category.csv file in ComfyUi-Ollama-YN\file and open it with Excel or another tool.
164 |
165 |
166 |
167 | 3、 Fill the model names found by the ollama list command into the two columns of the csv file. Whether a model belongs in vision_model or text_model can be judged from the model
168 | description on the Ollama website: a vision model infers prompt words from images, while a text model answers questions. See the example after step 4.
169 |
170 |
171 |
172 | 4、 Save the csv file and open ComfyUI; you will see the models you downloaded in the model list.
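For example, after registering one vision model and one text model, category.csv might look like this (the model names below are only examples):

```csv
vision_model,text_model
llava:latest,llama3:latest
```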
173 |
174 |
175 |
176 |
177 |
178 |
179 |
180 | 3、 When we find a model to download on the Ollama official website, if the model name shown after "run" has no ":tag" suffix, Ollama will automatically tag it as latest when downloading, so we need to manually append ":latest" to the model name in order to download this type of model through the extra_model field.
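Internally the nodes build the final model name in the same way (a small excerpt from ollama_main.py), which is why a name without a tag resolves to its ":latest" version:

```python
model = "qwen2"              # example name typed into extra_model, without a tag

# Same normalization the nodes apply in ollama_main.py
if ":" not in model:
    model = model + ":latest"

print(model)                 # -> "qwen2:latest"
```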
181 |
182 |
183 |
184 |
185 |
186 |
187 |
188 |
189 |
190 | 4、 After downloading a model, we need to search for and add the node again, and the newly downloaded model will be displayed in the model list. However, do not right-click the node and reload it directly, because that will cause an error.
191 | The correct way is as follows:
192 |
193 |
194 |
195 |
196 |
197 |
198 | The wrong way is as follows:
199 |
200 |
201 |
202 |
203 | 5、Use the save context node to save the context into a file. If you use the load context node right away, the new file will not be listed; you need to restart ComfyUI before the file appears.
204 |
205 |
206 |
207 |
208 |
209 |
210 |
211 |
212 |
213 |
214 |
215 |
216 |
217 |
218 |
219 |
220 |
221 |
222 |
223 |
224 | ---
225 |
226 | ### 中文
227 |
228 | ## 更新
229 |
230 |
231 | 9/17/2024 我对这个现有的ollama项目进行了一次重大更新,调整了我们下载模型和加载显示模型的方式,还提供了删除下载模型和保存加载上下文的能力,详情请阅读以下文件
232 |
233 | 5/15/2024 加入keep_alive支持
234 |
235 | 7/24/2024 加入extra_model支持,使用者可以根据自己想要加载的模型再里面填写,当extra_model不为none,将根据extra_model进行加载模型
236 |
237 | 0为加载模型后即释放显存,60m代表模型在显存内存在60m后才会释放
238 |
239 | # ComfyUi-Ollama-YN
240 | 这是一个整合的项目,我在comfyui-ollama 项目的基础上参考了 ComfyUI-Prompt-MZ 项目的提示词和正则代码使生成的提示词更符合stable diffusion的格式
241 |
242 | 参考的两个项目是
243 |
244 | https://github.com/stavsap/comfyui-ollama
245 |
246 | https://github.com/MinusZoneAI/ComfyUI-Prompt-MZ
247 |
248 |
249 | 在这里我向这两个项目的作者表示由衷的感谢
250 |
251 | # 安装
252 | 1、安装ComfyUI
253 |
254 | 2、在custom_nodes 文件夹下在cmd页面输入git clone https://github.com/wujm424606/ComfyUi-Ollama-YN.git 完成项目的安装 或者 下载zip文件将文件解压到custom_nodes/ComfyUi-Ollama-YN目录
255 |
256 |
257 | 3、重启ComfyUI
258 |
259 | # 安装Ollama
260 |
261 | https://ollama.com/
262 |
263 | 
264 |
265 |
266 | 点开上面的网址,然后点击下载,然后进行安装即可
267 |
268 | # 通过ollama安装模型
269 |
270 | 0、模型安装的默认位置是C盘的用户文件夹下的.ollama文件夹里,这个下载目录可以通过设置全局变量进行修改,修改方法参考
271 | https://blog.csdn.net/Yurixu/article/details/136443395
272 |
273 | 1、安装完ollama后在后台中需要将ollama保持启动状态
274 |
275 | 
276 |
277 | 2、点开 https://ollama.com/library 网站,搜索然后选择好要安装的模型,然后把模型名输入到ollama vision或者ollama generate节点的extra model里,点击运行即可完成安装
278 |
279 |
280 | 
281 |
282 |
283 | 详情看tips提示
284 |
285 |
286 |
287 | # 介绍整个插件的功能
288 |
289 | 第一个功能:
290 |
291 | 
292 |
293 |
294 | 这个功能就是推导图片的提示词
295 |
296 | 
297 |
298 |
299 | 我提供4个默认视觉模型。 如果您有更好的模型,可以在extra model中添加模型名。然后点击运行,系统会自动下载模型。模型下载完成后,再次搜索并加载该节点,新下载的模型会自动加载到模型列表中。
300 |
301 |
302 | 第二个功能:
303 |
304 | 
305 |
306 |
307 | 这个功能就是简单的问答功能.
308 |
309 | 
310 |
311 |
312 |
313 | 我提供了6个默认文本模型。 如果您有更好的模型,可以在extra model中添加新模型名。点击运行,系统会自动下载模型。模型下载完成后,再次搜索并加载该节点,新下载的模型会自动加载到模型列表中。
314 |
315 |
316 | 第三个功能:
317 |
318 | 
319 |
320 |
321 | 这个功能是可以润色提示词。 添加新模型方式,和前面一致
322 |
323 |
324 | 第四个功能:
325 |
326 | 
327 |
328 |
329 | 这个功能是根据上下文联动进行多轮回答。 添加新模型方式,和前面一致
330 |
331 | 第5个功能:
332 |
333 |
334 | 
335 |
336 |
337 | 这个功能是通过加载context文件里的内容进行回答,需要和save context节点配合使用.
338 |
339 |
340 | 第6个功能:
341 |
342 |
343 | 
344 |
345 |
346 | 这个功能是使用内置模板的方式,使模型生成的提示词格式更接近stable diffusion。但需要注意的是,由于内置了模板,所以会对一些模型无效,所以使用模型的选择上需要谨慎。 添加新模型方式,和前面一致
347 |
348 |
349 | 第7个功能:
350 |
351 |
352 | 
353 |
354 |
355 | 这个功能是删除已经下载的模型
356 |
357 |
358 | # Tips 提示:
359 |
360 | 1、
361 |
362 | 以前:
363 |
364 | 以前,我们下载模型是通过调用cmd,然后输入如下命令完成下载
365 |
366 | ollama run model_name
367 |
368 | 现在:
369 |
370 | 现在我们不需要在使用cmd输入命令下载了,直接在extra model输入要下载的模型名即可,但是模型名的输入有需要注意的点,在后面会提到
371 |
372 | 
373 |
374 |
375 | 下载过程会在comfyui的cmd里面显示:
376 |
377 | 成功下载结果如下图
378 |
379 |
380 |
381 |
382 |
383 |
384 | 2、如果你以前有下载的模型,但由于新更新导致以前下载模型无法显示在模型列表中,请参考以下方法
385 |
386 | 1、 首先打开cmd,并输入ollama list, 显示所有已下载的模型
387 |
388 |
389 |
390 | 2、 然后,在ComfyUi-Ollama-YN\file 文件夹中找到category.csv文件,并通过Excel或其他工具打开它。
391 |
392 |
393 |
394 | 3、 将您在ollama list命令下找到的模型名称填写到csv文件的两列中。可以根据Ollama网站上的模型描述判断模型应填入vision_model还是text_model列。
395 | 简单来说,视觉模型用于反推图片提示词,文本模型用于回答问题。
396 |
397 |
398 |
399 | 4、 保存CSV文件并打开comfyui,调用节点在模型列表里即可看到您下载的模型
400 |
401 |
402 |
403 |
404 |
405 |
406 |
407 | 3、 当我们在Ollama官方网站上找到要下载的模型时,如果run后面的模型名称没有:+参数的形式,那么下载时会自动标记为最新,所以我们需要在型号名称后手动添加“:latest”才能在extra model中下载该模型
408 |
409 |
410 |
411 |
412 |
413 |
414 |
415 |
416 |
417 | 4、 当我们下载模型完成后,需要再次搜索并导入,新下载的模型就会显示在模型列表中。但不要使用鼠标右键使用reload重新加载模型,否则会导致错误。
418 |
419 | 正确方式如下图:
420 |
421 |
422 |
423 |
424 |
425 |
426 | 错误方式如下图:
427 |
428 |
429 |
430 |
431 | 5、使用保存上下文节点将上下文保存到文件中。如果此时使用加载上下文节点加载,则不会显示该文件。您需要重新启动comfyui才能找到文件
432 |
433 |
434 |
435 |
436 |
437 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | from .ollama_main import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
2 |
3 |
4 | WEB_DIRECTORY = "./web"
5 |
6 | __all__ = ['NODE_CLASS_MAPPINGS', 'NODE_DISPLAY_NAME_MAPPINGS', "WEB_DIRECTORY"]
7 |
--------------------------------------------------------------------------------
/examples/DownLoad New Model Workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/DownLoad New Model Workflow.png
--------------------------------------------------------------------------------
/examples/Equal My Ollama Generate WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/Equal My Ollama Generate WorkFlow.png
--------------------------------------------------------------------------------
/examples/My Ollama Delete Model WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/My Ollama Delete Model WorkFlow.png
--------------------------------------------------------------------------------
/examples/My Ollama Generate Advance WorkFlow-Context.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/My Ollama Generate Advance WorkFlow-Context.png
--------------------------------------------------------------------------------
/examples/My Ollama Generate WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/My Ollama Generate WorkFlow.png
--------------------------------------------------------------------------------
/examples/My Ollama Special Generate Advance WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/My Ollama Special Generate Advance WorkFlow.png
--------------------------------------------------------------------------------
/examples/My Ollama Vision WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/My Ollama Vision WorkFlow.png
--------------------------------------------------------------------------------
/examples/Ollama Generate Advance WorkFlow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/Ollama Generate Advance WorkFlow.png
--------------------------------------------------------------------------------
/examples/download.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wujm424606/ComfyUi-Ollama-YN/129df95accf94521c6de511d5149330396881edd/examples/download.png
--------------------------------------------------------------------------------
/file/category.csv:
--------------------------------------------------------------------------------
1 | vision_model,text_model
2 |
--------------------------------------------------------------------------------
/file/saved_contexts/save_context:
--------------------------------------------------------------------------------
1 | save your contexts
2 |
--------------------------------------------------------------------------------
/ollama_main.py:
--------------------------------------------------------------------------------
1 | import random
2 | import sys
3 |
4 | from ollama import Client
5 | from PIL import Image
6 | import numpy as np
7 | import base64
8 | from io import BytesIO
9 | import json
10 | import re
11 | from aiohttp import web
12 | from server import PromptServer
13 | import pandas as pd
14 | import os
15 | import subprocess
16 | import time
17 | import pickle
18 | from datetime import datetime
19 |
20 | FILE_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "file")
21 | category_file_path = os.path.join(FILE_DIR, "category.csv")
22 |
23 |
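# Template injected into the My Ollama Special Generate Advance prompt; the model
# is asked to fill in these fields, which are later parsed back out with regexes.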
24 | prompt_template = {
25 | "description": "",
26 | "long_prompt": "",
27 | "camera_angle_word": "",
28 | "style_words": "",
29 | "subject_words": "",
30 | "light_words": "",
31 | "environment_words": ""
32 | }
33 |
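# HTTP route used by web/js/ollamaOperation.js: returns the downloaded models
# recorded in category.csv, tagging each name as "(vision)" or "(text)" so the
# front-end can populate the model dropdowns.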
34 | @PromptServer.instance.routes.post("/ollama-YN/get_current_models")
35 | async def get_current_models(request):
36 | data = await request.json()
37 | url = data.get("url")
38 | client = Client(host=url)
39 | models = []
40 | # print("path:", os.getcwd())
41 | df = pd.read_csv(category_file_path)
42 | vision_models = df['vision_model'].tolist()
43 | # print("vision_models:", vision_models)
44 | text_models = df['text_model'].tolist()
45 | # print("text_models:", text_models)
46 | for model in client.list().get('models', []):
47 | # print("model_name:", model["name"])
48 | if model["name"] in vision_models:
49 | # print("yes")
50 | model["name"] = model["name"] + " (vision)"
51 | models.append(model["name"])
52 | elif model["name"] in text_models:
53 | # print("no")
54 | model["name"] = model["name"] + " (text)"
55 | models.append(model["name"])
56 | # print("models:", models)
57 | return web.json_response(models)
58 |
59 |
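# Record a model name in category.csv: reuse the first empty cell of the given
# column, or append a new row if the column is full. Does nothing if the name
# is already present.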
60 | def add_item_to_csv(file_path, item, column_name):
61 | df = pd.read_csv(file_path)
62 | last_index = len(df[column_name]) - 1
63 | # print("last_index:", last_index)
64 | # print("item:", item)
65 | signal = True
66 | current_index = -1
67 | if item in df[column_name].values:
68 | return
69 | for index, col in enumerate(df[column_name]):
70 | # print("index:", index)
71 | # print("col:", col)
72 | current_index = index
73 | if pd.isna(col):
74 | df.at[index, column_name] = item
75 | signal = False
76 | break
77 | if current_index == last_index and signal:
78 | df.loc[last_index+1, column_name] = item
79 | df.to_csv(file_path, index=False)
80 |
81 | def is_read_model_in_csv(file_path, item, column_name):
82 | df = pd.read_csv(file_path)
83 | for col in df[column_name]:
84 | if col == item:
85 | return True
86 | return False
87 |
88 | def delete_item_to_csv(file_path, item, column_name):
89 | df = pd.read_csv(file_path)
90 | df[column_name] = df[column_name].replace(item, np.nan)
91 | df.to_csv(file_path, index=False)
92 |
93 |
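# Download a model by shelling out to "ollama pull" and re-printing its progress
# output as a single updating line in the ComfyUI console.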
94 | def pull_model_with_progress(model):
95 | print("*" * 50)
96 |     # Start the "ollama pull" subprocess
97 | process = subprocess.Popen(
98 | ['ollama', 'pull', model],
99 | stdout=subprocess.PIPE,
100 | stderr=subprocess.STDOUT,
101 | stdin=subprocess.DEVNULL,
102 | encoding='utf-8'
103 | )
104 |
105 | progress = 0
106 | current_file = ""
107 |
108 |     # Read the subprocess output line by line
109 | with process.stdout:
110 | for line in iter(process.stdout.readline, ''):
111 |
112 | match1 = re.search(
113 | r'pulling\s+(\S+)\.\.\.\s+(\d+)%',
114 | line
115 | )
116 | match2 = re.search(
117 | r"(\d+\.?\d*)\s*(B|KB|MB|GB)/(\d+\.?\d*)\s*(B|KB|MB|GB)\s+(\d+\.?\d*)\s*(B/s|KB/s|MB/s|GB/s)",
118 | line
119 | )
120 | if match1 and match2:
121 | filename = match1.group(1)
122 | progress_percent = int(match1.group(2))
123 | transferred_size = float(match2.group(1))
124 | transferred_unit = match2.group(2)
125 | total_size = float(match2.group(3))
126 | total_unit = match2.group(4)
127 | speed = float(match2.group(5))
128 | speed_unit = match2.group(6)
129 | if progress_percent > progress or current_file != filename:
130 | progress = progress_percent
131 | current_file = filename
132 | sys.stdout.write("\r")
133 | sys.stdout.write(f"Loading {filename}... Progress: {progress}% {transferred_size}{transferred_unit}/{total_size}{total_unit} {speed}{speed_unit} ")
134 | sys.stdout.flush()
135 | else:
136 |                 # If no progress information was found, print the line as-is
137 | sys.stdout.write("\r")
138 | sys.stdout.write(line)
139 | sys.stdout.flush()
140 | time.sleep(1)
141 |
142 |     # Wait for the subprocess to finish and check its exit status
143 |     exit_code = process.wait()
144 | if exit_code != 0:
145 | raise subprocess.CalledProcessError(exit_code, process.args)
146 |
147 | print("*" * 50)
148 |
149 |
150 |
151 |
152 |
153 |
154 |
155 | class MyOllamaVision:
156 | def __init__(self):
157 | pass
158 |
159 | @classmethod
160 | def INPUT_TYPES(s):
161 | return {
162 | "required": {
163 | "images": ("IMAGE",),
164 | "query": ("STRING", {
165 | "multiline": True,
166 | "default": "describe the image"
167 | }),
168 | "debug": (["disable", "enable"],),
169 | "url": ("STRING", {
170 | "multiline": False,
171 | "default": "http://127.0.0.1:11434"
172 | }),
173 | "model": ((), {}),
174 | "extra_model": ("STRING", {
175 | "multiline": False,
176 | "default": "none"
177 | }),
178 | "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
179 | "keep_alive": (["0", "60m"],),
180 |
181 | },
182 | }
183 |
184 | RETURN_TYPES = ("STRING",)
185 | RETURN_NAMES = ("description",)
186 | FUNCTION = "ollama_vision"
187 | CATEGORY = "My Ollama"
188 |
189 | def ollama_vision(self, images, query,seed, debug, url, keep_alive, model, extra_model):
190 | images_b64 = []
191 |
192 | for (batch_number, image) in enumerate(images):
193 | i = 255. * image.cpu().numpy()
194 | img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
195 | buffered = BytesIO()
196 | img.save(buffered, format="PNG")
197 | img_bytes = base64.b64encode(buffered.getvalue())
198 | images_b64.append(str(img_bytes, 'utf-8'))
199 |
200 | model = model.split(" (")[0]
201 |
202 | if extra_model != "none":
203 | model = extra_model
204 |
205 | if ":" not in model:
206 | model = model + ":latest"
207 |
208 |
209 |
210 | client = Client(host=url)
211 | options = {
212 | "seed": seed,
213 | }
214 |
215 | if debug == "enable":
216 | print(f"""[Ollama Vision]
217 | request query params:
218 |
219 | - query: {query}
220 | - url: {url}
221 | - model: {model}
222 | - extra_model: {extra_model}
223 | - options: {options}
224 | - keep_alive: {keep_alive}
225 |
226 | """)
227 |
228 |
229 |
230 | print(f"loading model: {model}")
231 | # status = client.pull(model)
232 |         # Pull the model only if it is not yet recorded in category.csv
233 | if not is_read_model_in_csv(category_file_path, model, "vision_model"):
234 | pull_model_with_progress(model)
235 |
236 | add_item_to_csv(category_file_path, model, "vision_model")
237 | # print(f"model loaded: {status}")
238 | response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)
239 |
240 |
241 |
242 | return (response['response'],)
243 |
244 |
245 | class MyOllamaGenerate:
246 | def __init__(self):
247 | pass
248 |
249 | @classmethod
250 | def INPUT_TYPES(s):
251 | return {
252 | "required": {
253 | "prompt": ("STRING", {
254 | "multiline": True,
255 | "default": "What is Art?"
256 | }),
257 | "debug": (["disable", "enable"],),
258 | "url": ("STRING", {
259 | "multiline": False,
260 | "default": "http://127.0.0.1:11434"
261 | }),
262 | "model": ((), {}),
263 | "extra_model": ("STRING", {
264 | "multiline": False,
265 | "default": "none"
266 | }),
267 | "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
268 | "keep_alive": (["0", "60m"],),
269 | },
270 | }
271 |
272 | RETURN_TYPES = ("STRING",)
273 | RETURN_NAMES = ("response",)
274 | FUNCTION = "ollama_generate"
275 | CATEGORY = "My Ollama"
276 |
277 | def ollama_generate(self, prompt, debug, url, model, seed, keep_alive, extra_model):
278 |
279 | model = model.split(" (")[0]
280 |
281 | if extra_model != "none":
282 | model = extra_model
283 |
284 | if ":" not in model:
285 | model = model + ":latest"
286 |
287 |
288 |
289 | client = Client(host=url)
290 |
291 | options = {
292 | "seed": seed,
293 | }
294 |
295 | if not is_read_model_in_csv(category_file_path, model, "text_model"):
296 | pull_model_with_progress(model)
297 |
298 | add_item_to_csv(category_file_path, model, "text_model")
299 |
300 | response = client.generate(model=model, prompt=prompt, options=options, keep_alive=keep_alive)
301 |
302 | if debug == "enable":
303 | print(f"""[Ollama Generate]
304 | request query params:
305 |
306 | - prompt: {prompt}
307 | - url: {url}
308 | - model: {model}
309 |
310 | """)
311 |
312 | print(f"""\n[Ollama Generate]
313 | response:
314 |
315 | - model: {response["model"]}
316 | - created_at: {response["created_at"]}
317 | - done: {response["done"]}
318 | - eval_duration: {response["eval_duration"]}
319 | - load_duration: {response["load_duration"]}
320 | - eval_count: {response["eval_count"]}
321 | - eval_duration: {response["eval_duration"]}
322 | - prompt_eval_duration: {response["prompt_eval_duration"]}
323 |
324 | - response: {response["response"]}
325 |
326 | - context: {response["context"]}
327 |
328 | - options : {options}
329 | - keep_alive: {keep_alive}
330 |
331 |
332 | """)
333 |
334 | return (response['response'],)
335 |
336 | # https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion
337 |
338 | class MyOllamaGenerateAdvance:
339 | def __init__(self):
340 | pass
341 |
342 | @classmethod
343 | def INPUT_TYPES(s):
344 | seed = random.randint(1, 2 ** 31)
345 | return {
346 | "required": {
347 | "prompt": ("STRING", {
348 | "multiline": True,
349 | "default": "1个女孩在森林里散步"
350 | }),
351 | "debug": (["disable", "enable"],),
352 | "url": ("STRING", {
353 | "multiline": False,
354 | "default": "http://127.0.0.1:11434"
355 | }),
356 | # "model": ("STRING", {
357 | # "multiline": False,
358 | # "default": (["llama3:8b-instruct-q4_K_M", "llama3", "phi3:instruct", "phi3"],)
359 | # }),
360 | "model": ((), {}),
361 | "extra_model": ("STRING", {
362 | "multiline": False,
363 | "default": "none"
364 | }),
365 | "system": ("STRING", {
366 | "multiline": True,
367 | "default": "You are creating a prompt for Stable Diffusion to generate an image. First step: understand the input and generate a text prompt for the input. Second step: only respond in English with the prompt itself in phrase, but embellish it as needed but keep it under 200 tokens.",
368 | "title":"system"
369 | }),
370 | "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
371 | "top_k": ("FLOAT", {"default": 40, "min": 0, "max": 100, "step": 1}),
372 | "top_p": ("FLOAT", {"default": 0.9, "min": 0, "max": 1, "step": 0.05}),
373 | "temperature": ("FLOAT", {"default": 0.5, "min": 0, "max": 1, "step": 0.05}),
374 | "num_predict": ("FLOAT", {"default": -1, "min": -2, "max": 2048, "step": 1}),
375 | "tfs_z": ("FLOAT", {"default": 1, "min": 1, "max": 1000, "step": 0.05}),
376 | "keep_alive": (["0", "60m"],),
377 | },"optional": {
378 | "context": ("STRING", {"forceInput": True}),
379 | }
380 | }
381 |
382 | RETURN_TYPES = ("STRING","STRING",)
383 | RETURN_NAMES = ("response","context",)
384 | FUNCTION = "ollama_generate_advance"
385 | CATEGORY = "My Ollama"
386 |
387 | def ollama_generate_advance(self, prompt, debug, url, model, extra_model, system, seed, top_k, top_p,temperature,num_predict,tfs_z, keep_alive, context=None):
388 |
389 | model = model.split(" (")[0]
390 |
391 | if extra_model != "none":
392 | model = extra_model
393 |
394 | if ":" not in model:
395 | model = model + ":latest"
396 |
397 |
398 |
399 | client = Client(host=url)
400 |
401 | # num_keep: int
402 | # seed: int
403 | # num_predict: int
404 | # top_k: int
405 | # top_p: float
406 | # tfs_z: float
407 | # typical_p: float
408 | # repeat_last_n: int
409 | # temperature: float
410 | # repeat_penalty: float
411 | # presence_penalty: float
412 | # frequency_penalty: float
413 | # mirostat: int
414 | # mirostat_tau: float
415 | # mirostat_eta: float
416 | # penalize_newline: bool
417 | # stop: Sequence[str]
418 |
419 | options = {
420 | "seed": seed,
421 | "top_k":top_k,
422 | "top_p":top_p,
423 | "temperature":temperature,
424 | "num_predict":num_predict,
425 | "tfs_z":tfs_z,
426 | }
427 |
428 | print("advance_model: ", model)
429 |
430 | if not is_read_model_in_csv(category_file_path, model, "text_model"):
431 | pull_model_with_progress(model)
432 |
433 | add_item_to_csv(category_file_path, model, "text_model")
434 |
435 | response = client.generate(model=model, system=system, prompt=prompt, keep_alive=keep_alive, context=context, options=options)
436 |
437 | if debug == "enable":
438 | print(f"""[Ollama Generate Advance]
439 | request query params:
440 |
441 | - prompt: {prompt}
442 | - url: {url}
443 | - model: {model}
444 | - extra_model: {extra_model}
445 | - keep_alive: {keep_alive}
446 | - options: {options}
447 | """)
448 |
449 | print(f"""\n[Ollama Generate Advance]
450 | response:
451 |
452 | - model: {response["model"]}
453 | - created_at: {response["created_at"]}
454 | - done: {response["done"]}
455 | - eval_duration: {response["eval_duration"]}
456 | - load_duration: {response["load_duration"]}
457 | - eval_count: {response["eval_count"]}
458 | - eval_duration: {response["eval_duration"]}
459 | - prompt_eval_duration: {response["prompt_eval_duration"]}
460 |
461 | - response: {response["response"]}
462 |
463 | - context: {response["context"]}
464 |
465 | """)
466 | return (response['response'],response['context'],)
467 |
468 | class MyOllamaSpecialGenerateAdvance:
469 | def __init__(self):
470 | pass
471 | #The default system prompt referenced https://github.com/MinusZoneAI/ComfyUI-Prompt-MZ project
472 | @classmethod
473 | def INPUT_TYPES(s):
474 | seed = random.randint(1, 2 ** 31)
475 | return {
476 | "required": {
477 | "prompt": ("STRING", {
478 | "multiline": True,
479 | "default": f"一个小女孩在森林里"
480 | }),
481 | "debug": (["disable", "enable"],),
482 | "style_categories": (["none", "high quality", "photography", "illustration"],),
483 | "url": ("STRING", {
484 | "multiline": False,
485 | "default": "http://127.0.0.1:11434"
486 | }),
487 | # "model": ("STRING", {
488 | # "multiline": False,
489 | # "default": (["llama3:8b-instruct-q4_K_M", "llama3", "phi3:instruct", "phi3"],)
490 | # }),
491 | "model": ((), {}),
492 | "extra_model": ("STRING", {
493 | "multiline": False,
494 | "default": "none"
495 | }),
496 | "system": ("STRING", {
497 | "multiline": True,
498 | "default": """Stable Diffusion is an AI art generation model similar to DALLE-2.
499 | Below is a list of prompts that can be used to generate images with Stable Diffusion:
500 | - portait of a homer simpson archer shooting arrow at forest monster, front game card, drark, marvel comics, dark, intricate, highly detailed, smooth, artstation, digital illustration by ruan jia and mandy jurgens and artgerm and wayne barlowe and greg rutkowski and zdislav beksinski
501 | - pirate, concept art, deep focus, fantasy, intricate, highly detailed, digital painting, artstation, matte, sharp focus, illustration, art by magali villeneuve, chippy, ryan yee, rk post, clint cearley, daniel ljunggren, zoltan boros, gabor szikszai, howard lyon, steve argyle, winona nelson
502 | - ghost inside a hunted room, art by lois van baarle and loish and ross tran and rossdraws and sam yang and samdoesarts and artgerm, digital art, highly detailed, intricate, sharp focus, Trending on Artstation HQ, deviantart, unreal engine 5, 4K UHD image
503 | - red dead redemption 2, cinematic view, epic sky, detailed, concept art, low angle, high detail, warm lighting, volumetric, godrays, vivid, beautiful, trending on artstation, by jordan grimmer, huge scene, grass, art greg rutkowski
504 | - a fantasy style portrait painting of rachel lane / alison brie hybrid in the style of francois boucher oil painting unreal 5 daz. rpg portrait, extremely detailed artgerm greg rutkowski alphonse mucha greg hildebrandt tim hildebrandt
505 | - athena, greek goddess, claudia black, art by artgerm and greg rutkowski and magali villeneuve, bronze greek armor, owl crown, d & d, fantasy, intricate, portrait, highly detailed, headshot, digital painting, trending on artstation, concept art, sharp focus, illustration
506 | - closeup portrait shot of a large strong female biomechanic woman in a scenic scifi environment, intricate, elegant, highly detailed, centered, digital painting, artstation, concept art, smooth, sharp focus, warframe, illustration, thomas kinkade, tomasz alen kopera, peter mohrbacher, donato giancola, leyendecker, boris vallejo
507 | - ultra realistic illustration of steve urkle as the hulk, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha
508 | I want you to write me a list of detailed prompts exactly about the idea written after IDEA. Follow the structure of the example prompts. This means a very short description of the scene, followed by modifiers divided by commas to alter the mood, style, lighting, and more.
509 | Please generate the long prompt version of the short one according to the given examples. Long prompt version should consist of 3 to 5 sentences. Long prompt version must specify the color, shape, texture or spatial relation of the included objects. DO NOT generate sentences that describe any atmosphere!!!""",
510 | "title":"system"
511 | }),
512 | "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
513 | "top_k": ("FLOAT", {"default": 40, "min": 0, "max": 100, "step": 1}),
514 | "top_p": ("FLOAT", {"default": 0.9, "min": 0, "max": 1, "step": 0.05}),
515 | "temperature": ("FLOAT", {"default": 0.5, "min": 0, "max": 1, "step": 0.05}),
516 | "num_predict": ("FLOAT", {"default": -1, "min": -2, "max": 2048, "step": 1}),
517 | "tfs_z": ("FLOAT", {"default": 1, "min": 1, "max": 1000, "step": 0.05}),
518 | "keep_alive": (["0", "60m"],),
519 | }
520 | }
521 |
522 | RETURN_TYPES = ("STRING", "STRING", "STRING", "STRING", "STRING", "STRING", "STRING", "STRING", "STRING")
523 | RETURN_NAMES = ("total_response", "description", "long_prompt", "camera_angle_word", "style_words", "subject_words", "light_words", "environment_words", "style_categories")
524 | FUNCTION = "ollama_generate_advance"
525 | CATEGORY = "My Ollama"
526 |
527 | def ollama_generate_advance(self, prompt, debug, style_categories, url, model, extra_model, system, seed,top_k, top_p,temperature,num_predict,tfs_z, keep_alive):
528 |
529 | model = model.split(" (")[0]
530 |
531 | if extra_model != "none":
532 | model = extra_model
533 |
534 | if ":" not in model:
535 | model = model + ":latest"
536 |
537 | prompt = f"{prompt} \nUse the following template: {json.dumps(prompt_template)}."
538 | client = Client(host=url)
539 |
540 | # num_keep: int
541 | # seed: int
542 | # num_predict: int
543 | # top_k: int
544 | # top_p: float
545 | # tfs_z: float
546 | # typical_p: float
547 | # repeat_last_n: int
548 | # temperature: float
549 | # repeat_penalty: float
550 | # presence_penalty: float
551 | # frequency_penalty: float
552 | # mirostat: int
553 | # mirostat_tau: float
554 | # mirostat_eta: float
555 | # penalize_newline: bool
556 | # stop: Sequence[str]
557 |
558 | options = {
559 | "seed": seed,
560 | "top_k":top_k,
561 | "top_p":top_p,
562 | "temperature":temperature,
563 | "num_predict":num_predict,
564 | "tfs_z":tfs_z,
565 | }
566 |
567 | if not is_read_model_in_csv(category_file_path, model, "text_model"):
568 | pull_model_with_progress(model)
569 |
570 | add_item_to_csv(category_file_path, model, "text_model")
571 |
572 | response = client.generate(model=model, system=system, prompt=prompt, context=None, keep_alive=keep_alive, options=options)
573 |
574 | if debug == "enable":
575 | print(f"""[Ollama Special Generate Advance]
576 | request query params:
577 |
578 | - prompt: {prompt}
579 | - url: {url}
580 | - model: {model}
581 | - extra_model: {extra_model}
582 | - keep_alive: {keep_alive}
583 | - options: {options}
584 | """)
585 |
586 | print(f"""\n[Ollama Generate Advance]
587 | response:
588 |
589 | - model: {response["model"]}
590 | - created_at: {response["created_at"]}
591 | - done: {response["done"]}
592 | - eval_duration: {response["eval_duration"]}
593 | - load_duration: {response["load_duration"]}
594 | - eval_count: {response["eval_count"]}
595 | - eval_duration: {response["eval_duration"]}
596 | - prompt_eval_duration: {response["prompt_eval_duration"]}
597 |
598 | - response: {response["response"]}
599 |
600 |
601 | """)
602 |
603 |
604 | #The regular expression code referenced https://github.com/MinusZoneAI/ComfyUI-Prompt-MZ project
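        # Extract each template field (description, long_prompt, camera_angle_word, ...)
        # from the model's reply; the patterns tolerate quotes, markdown bold markers and mixed case.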
605 |
606 | corresponding_prompt = dict()
607 | description = re.findall(
608 | r"\s*\**\s*\"*[Dd][Ee][Ss][Cc][Rr][Ii][Pp][Tt][Ii][Oo][Nn]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
609 | response["response"])
610 | if len(description) == 0:
611 | description = ""
612 | else:
613 | description = description[0]
614 |
615 |
616 | long_prompt = re.findall(
617 | r"\s*\**\s*\"*[Ll][Oo][Nn][Gg][ _][Pp][Rr][Oo][Mm][Pp][Tt]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
618 | response["response"])
619 | if len(long_prompt) == 0:
620 | long_prompt = ""
621 | else:
622 | long_prompt = long_prompt[0]
623 | # main_color_word = re.findall(
624 | # r"\s*\**\s*\"*[Mm][Aa][Ii][Nn][ _][Cc][Oo][Ll][Oo][Rr][ _][Ww][Oo][Rr][Dd]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
625 | # response["response"])
626 | camera_angle_word = re.findall(
627 | r"\s*\**\s*\"*[Cc][Aa][Mm][Ee][Rr][Aa][ _][Aa][Nn][Gg][Ll][Ee][ _][Ww][Oo][Rr][Dd]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
628 | response["response"])
629 | if len(camera_angle_word) == 0:
630 | camera_angle_word = ""
631 | else:
632 | camera_angle_word = camera_angle_word[0]
633 |
634 |
635 | style_words = re.findall(
636 | r"\s*\**\s*\"*[Ss][Tt][Yy][Ll][Ee][ _][Ww][Oo][Rr][Dd][Ss]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
637 | response["response"])
638 | if len(style_words) == 0:
639 | style_words = ""
640 | else:
641 | style_words = style_words[0]
642 |
643 |
644 | subject_words = re.findall(
645 | r"\s*\**\s*\"*[Ss][Uu][Bb][Jj][Ee][Cc][Tt][ _][Ww][Oo][Rr][Dd][Ss]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
646 | response["response"])
647 | if len(subject_words) == 0:
648 | subject_words = ""
649 | else:
650 | subject_words = subject_words[0]
651 |
652 |
653 | light_words = re.findall(
654 | r"\s*\**\s*\"*[Ll][Ii][Gg][Hh][Tt][ _][Ww][Oo][Rr][Dd][Ss]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
655 | response["response"])
656 | if len(light_words) == 0:
657 | light_words = ""
658 | else:
659 | light_words = light_words[0]
660 |
661 |
662 | environment_words = re.findall(
663 | r"\s*\**\s*\"*[Ee][Nn][Vv][Ii][Rr][Oo][Nn][Mm][Ee][Nn][Tt][ _][Ww][Oo][Rr][Dd][Ss]\"*\s*\**\s*:\s*\**\s*\"*\s*\"*([^*\n\"]+)",
664 | response["response"])
665 | if len(environment_words) == 0:
666 | environment_words = ""
667 | else:
668 | environment_words = environment_words[0]
669 |
670 |
671 |
672 | corresponding_prompt["description"] = description
673 | corresponding_prompt["long_prompt"] = long_prompt
674 | # corresponding_prompt["main_color_word"] = main_color_word
675 | corresponding_prompt["camera_angle_word"] = camera_angle_word
676 | corresponding_prompt["style_words"] = style_words
677 | corresponding_prompt["subject_words"] = subject_words
678 | corresponding_prompt["light_words"] = light_words
679 | corresponding_prompt["environment_words"] = environment_words
680 |
681 | main_components= ["description", "long_prompt", "main_color_word", "camera_angle_word", "style_words", "subject_words", "light_words", "environment_words"]
682 |
683 | full_responses = []
684 | output_description = ""
685 | output_long_prompt = ""
686 | output_camera_angle_word = ""
687 | output_style_words = ""
688 | output_subject_words = ""
689 | output_light_words = ""
690 | output_environment_words = ""
691 |
692 | if "description" in main_components:
693 | if corresponding_prompt["description"] != "":
694 | full_responses.append(f"({corresponding_prompt['description']})")
695 | output_description = corresponding_prompt['description']
696 | if "long_prompt" in main_components:
697 | if corresponding_prompt["long_prompt"] != "":
698 | full_responses.append(f"({corresponding_prompt['long_prompt']})")
699 | output_long_prompt = corresponding_prompt['long_prompt']
700 | # if "main_color_word" in main_components:
701 | # if corresponding_prompt["main_color_word"] != "":
702 | # full_responses.append(f"({corresponding_prompt['main_color_word']})")
703 | if "camera_angle_word" in main_components:
704 | if corresponding_prompt["camera_angle_word"] != "":
705 | full_responses.append(f"({corresponding_prompt['camera_angle_word']})")
706 | output_camera_angle_word = corresponding_prompt['camera_angle_word']
707 |
708 |
709 |
710 | if "style_words" in main_components:
711 | corresponding_prompt["style_words"] = [x.strip() for x in corresponding_prompt["style_words"].split(",") if
712 | x != ""]
713 | # print(corresponding_prompt["style_words"])
714 | if len(corresponding_prompt["style_words"]) > 0:
715 | full_responses.append(f"({', '.join(corresponding_prompt['style_words'])})")
716 | output_style_words = ', '.join(corresponding_prompt['style_words'])
717 |
718 |
719 | if "subject_words" in main_components:
720 | corresponding_prompt["subject_words"] = [x.strip() for x in corresponding_prompt["subject_words"].split(",") if
721 | x != ""]
722 | if len(corresponding_prompt["subject_words"]) > 0:
723 | full_responses.append(f"({', '.join(corresponding_prompt['subject_words'])})")
724 | output_subject_words = ', '.join(corresponding_prompt['subject_words'])
725 |
726 | if "light_words" in main_components:
727 | corresponding_prompt["light_words"] = [x.strip() for x in corresponding_prompt["light_words"].split(",") if
728 | x != ""]
729 | if len(corresponding_prompt["light_words"]) > 0:
730 | full_responses.append(f"({', '.join(corresponding_prompt['light_words'])})")
731 | output_light_words = ', '.join(corresponding_prompt['light_words'])
732 |
733 | if "environment_words" in main_components:
734 | corresponding_prompt["environment_words"] = [x.strip() for x in
735 | corresponding_prompt["environment_words"].split(",") if x != ""]
736 | if len(corresponding_prompt["environment_words"]) > 0:
737 | full_responses.append(f"({', '.join(corresponding_prompt['environment_words'])})")
738 | output_environment_words = ', '.join(corresponding_prompt['environment_words'])
739 |
740 | full_response = ", ".join(full_responses)
741 |
742 |
743 |         # Remove line breaks
744 | while full_response.find("\n") != -1:
745 | full_response = full_response.replace("\n", " ")
746 |
747 |         # Replace periods with commas
748 | while full_response.find(".") != -1:
749 | full_response = full_response.replace(".", ",")
750 |
751 |         # Replace semicolons with commas
752 | while full_response.find(";") != -1:
753 | full_response = full_response.replace(";", ",")
754 |
755 |         # Remove redundant commas
756 | while full_response.find(",,") != -1:
757 | full_response = full_response.replace(",,", ",")
758 | while full_response.find(", ,") != -1:
759 | full_response = full_response.replace(", ,", ",")
760 |
761 |
762 | high_quality_prompt = "((high quality:1.4), (best quality:1.4), (masterpiece:1.4), (8K resolution), (2k wallpaper))"
763 | style_presets_prompt = {
764 | "none": "",
765 | "high quality": high_quality_prompt,
766 | "photography": f"{high_quality_prompt}, (RAW photo, best quality), (realistic, photo-realistic:1.2), (bokeh, cinematic shot, dynamic composition, incredibly detailed, sharpen, details, intricate detail, professional lighting, film lighting, 35mm, anamorphic, lightroom, cinematography, bokeh, lens flare, film grain, HDR10, 8K)",
767 | "illustration": f"{high_quality_prompt}, ((detailed matte painting, intricate detail, splash screen, complementary colors), (detailed),(intricate details),illustration,an extremely delicate and beautiful,ultra-detailed,highres,extremely detailed)",
768 | }
769 |
770 | output_style_categories = ""
771 |
772 | if style_categories == "none":
773 | full_response = f"{full_response}"
774 | elif style_categories == "high quality":
775 | style = style_presets_prompt["high quality"]
776 | full_response = f"{full_response}, {style}"
777 | output_style_categories = style[1:-1]
778 | elif style_categories == "photography":
779 | style = style_presets_prompt["photography"]
780 | full_response = f"{full_response}, {style}"
781 | output_style_categories = style[1:-1]
782 | elif style_categories == "illustration":
783 | style = style_presets_prompt["illustration"]
784 | full_response = f"{full_response}, {style}"
785 | output_style_categories = style[1:-1]
786 |
787 | return (full_response, output_description, output_long_prompt, output_camera_angle_word, output_style_words, output_subject_words, output_light_words, output_environment_words, output_style_categories,)
788 |
789 |
790 | class MyOllamaDeleteModel:
791 | def __init__(self):
792 | pass
793 |
794 | @classmethod
795 | def INPUT_TYPES(s):
796 | return {
797 | "required": {
798 | "url": ("STRING", {
799 | "multiline": False,
800 | "default": "http://127.0.0.1:11434"
801 | }),
802 | "model": ((), {}),
803 | },
804 | }
805 |
806 | RETURN_TYPES = ("STRING",)
807 | RETURN_NAMES = ("description",)
808 | FUNCTION = "ollama_delete_model"
809 | CATEGORY = "My Ollama"
810 |
811 | def ollama_delete_model(self, url, model):
812 | category = model.split(" (")[1][:-1]
813 | model = model.split(" (")[0]
814 | if category == "vision":
815 | delete_item_to_csv(category_file_path, model, "vision_model")
816 | elif category == "text":
817 | delete_item_to_csv(category_file_path, model, "text_model")
818 | client = Client(host=url)
819 | response = client.delete(model=model)
820 |
821 | return (response["status"],)
822 |
823 | class MyOllamaSaveContext:
824 | def __init__(self):
825 | self._base_dir = FILE_DIR + os.path.sep + "saved_contexts"
826 |
827 | @classmethod
828 | def INPUT_TYPES(s):
829 | return {"required":
830 | {"context": ("STRING", {"forceInput": True},),
831 | "filename": ("STRING", {"default": "context"})},
832 | }
833 |
834 | RETURN_TYPES = ()
835 | FUNCTION = "ollama_save_context"
836 |
837 | OUTPUT_NODE = True
838 | CATEGORY = "My Ollama"
839 |
840 | def ollama_save_context(self, filename, context=None):
841 | # print("context:", context)
842 | # print("list context:", list(context))
843 | # print("type:", type(context))
844 | now = datetime.now()
845 | format_now = now.strftime("_%Y-%m-%d_%H-%M-%S")
846 | path = self._base_dir + os.path.sep + filename + format_now+".pickle"
847 | with open(path, "wb") as f:
848 | pickle.dump(context, f)
849 |
850 | return {"content": {"context": context}}
851 |
852 | class MyOllamaLoadContext:
853 | def __init__(self):
854 | self._base_dir = FILE_DIR + os.path.sep + "saved_contexts"
855 |
856 | @classmethod
857 | def INPUT_TYPES(s):
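        # The list of saved context files is built when the node definition is loaded,
        # so a newly saved .pickle only shows up after ComfyUI restarts (see README Tip 5).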
858 | input_dir = FILE_DIR + os.path.sep + "saved_contexts"
859 | files = [ file for file in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, file)) and os.path.splitext(file)[1] == ".pickle"]
860 | return {"required":
861 | {"context_file": (files, {})}
862 |
863 | }
864 |
865 |
866 | RETURN_TYPES = ("STRING",)
867 | RETURN_NAMES = ("context",)
868 | FUNCTION = "ollama_load_context"
869 |
870 | CATEGORY = "My Ollama"
871 |
872 | def ollama_load_context(self, context_file):
873 | path = self._base_dir + os.path.sep + context_file
874 | with open(path, "rb") as f:
875 | context = pickle.load(f)
876 | print(type(context))
877 | # print("context111:", context)
878 | return (context,)
879 |
880 |
881 |
882 | NODE_CLASS_MAPPINGS = {
883 | "MyOllamaVision": MyOllamaVision,
884 | "MyOllamaGenerate": MyOllamaGenerate,
885 | "MyOllamaGenerateAdvance": MyOllamaGenerateAdvance,
886 | "MyOllamaSpecialGenerateAdvance": MyOllamaSpecialGenerateAdvance,
887 | "MyOllamaDeleteModel": MyOllamaDeleteModel,
888 | "MyOllamaSaveContext": MyOllamaSaveContext,
889 | "MyOllamaLoadContext": MyOllamaLoadContext,
890 | }
891 |
892 | NODE_DISPLAY_NAME_MAPPINGS = {
893 | "MyOllamaVision": "My Ollama Vision",
894 | "MyOllamaGenerate": "My Ollama Generate",
895 | "MyOllamaGenerateAdvance": "My Ollama Generate Advance",
896 | "MyOllamaSpecialGenerateAdvance": "My Ollama Special Generate Advance",
897 | "MyOllamaDeleteModel": "My Ollama Delete Model",
898 | "MyOllamaSaveContext": "My Ollama Save Context",
899 | "MyOllamaLoadContext": "My Ollama Load Context",
900 | }
901 |
902 |
903 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "comfyui-ollama-yn"
3 | description = "Custom ComfyUI Nodes for interacting with [a/Ollama](https://ollama.com/) using the [a/ollama python client](https://github.com/ollama/ollama-python).\n Meanwhile it will provide better prompt descriptor for stable diffusion."
4 | version = "1.1.0"
5 | license = "LICENSE"
6 | dependencies = ["ollama", "pandas", "aiohttp"]
7 |
8 | [project.urls]
9 | Repository = "https://github.com/wujm424606/ComfyUi-Ollama-YN"
10 | # Used by Comfy Registry https://comfyregistry.org
11 |
12 | [tool.comfy]
13 | PublisherId = "wujm424606"
14 | DisplayName = "ComfyUi-Ollama-YN"
15 | Icon = ""
16 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | ollama
2 | pandas
3 | aiohttp
4 |
--------------------------------------------------------------------------------
/web/js/ollamaOperation.js:
--------------------------------------------------------------------------------
1 | // import { app } from "../../../../web/scripts/app.js";
2 | import { app } from "/scripts/app.js";
3 |
4 | app.registerExtension({
5 | name: "Comfy.ollamaCommand",
6 |   // Runs before each node type is registered
7 | async beforeRegisterNodeDef(nodeType, nodeData, app) {
8 | const fetchModels = async (url) => {
9 | try {
10 | const response = await fetch("/ollama-YN/get_current_models", {
11 | method: "POST",
12 | headers: {
13 | "Content-Type": "application/json",
14 | },
15 | body: JSON.stringify({
16 | url,
17 | }),
18 | });
19 |
20 | if (response.ok) {
21 | const models = await response.json();
22 | console.debug("Fetched models:", models);
23 | return models;
24 | } else {
25 | console.error(`Failed to fetch models: ${response.status}`);
26 | return [];
27 | }
28 | } catch (error) {
29 | console.error(`Error fetching models`, error);
30 | return [];
31 | }
32 | };
33 |
34 | const dummy = async () => {
35 | // calling async method will update the widgets with actual value from the browser and not the default from Node definition.
36 | }
37 |
38 |
39 | if (["MyOllamaGenerate", "MyOllamaGenerateAdvance", "MyOllamaSpecialGenerateAdvance"].includes(nodeData.name) ) {
40 | const originalNodeCreated = nodeType.prototype.onNodeCreated;
41 | nodeType.prototype.onNodeCreated = async function () {
42 | if (originalNodeCreated) {
43 | originalNodeCreated.apply(this, arguments);
44 | }
45 |
46 | const urlWidget = this.widgets.find((w) => w.name === "url");
47 | const modelWidget = this.widgets.find((w) => w.name === "model");
48 | const updateModels = async () => {
49 | const url = urlWidget.value;
50 | const prevValue = modelWidget.value
51 | modelWidget.value = ''
52 | modelWidget.options.values = []
53 |
54 | var models = await fetchModels(url);
55 |
56 | const text_signal ="(text)"
57 | models = models.filter(model => model.includes(text_signal))
58 |
59 | var add_text_models = ["llama3.1:latest (text)", "llama3:latest (text)", "qwen2:latest (text)",
60 | "phi3.5:latest (text)", "phi3:latest (text)", "trollek/qwen2-diffusion-prompter:latest (text)"]
61 | add_text_models.forEach(model => {
62 | if (!models.includes(model)) {
63 | models.unshift(model);}
64 | });
65 |
66 | // Update modelWidget options and value
67 | modelWidget.options.values = models;
68 | console.debug("Updated text modelWidget.options.values:", modelWidget.options.values);
69 |
70 | if (models.includes(prevValue)) {
71 | modelWidget.value = prevValue; // stay on current.
72 | } else if (models.length > 0) {
73 | modelWidget.value = models[0]; // set first as default.
74 | }
75 |
76 | console.debug("Updated text modelWidget.value:", modelWidget.value);
77 | };
78 |
79 |
80 |
81 | // Initial update
82 | await dummy(); //
83 |
84 | await updateModels();
85 | };
86 |
87 | } else if (["MyOllamaVision"].includes(nodeData.name) ) {
88 | const originalNodeCreated = nodeType.prototype.onNodeCreated;
89 | nodeType.prototype.onNodeCreated = async function () {
90 | if (originalNodeCreated) {
91 | originalNodeCreated.apply(this, arguments);
92 | }
93 |
94 | const urlWidget = this.widgets.find((w) => w.name === "url");
95 | const modelWidget = this.widgets.find((w) => w.name === "model");
96 | const updateModels = async () => {
97 | const url = urlWidget.value;
98 | const prevValue = modelWidget.value
99 | modelWidget.value = ''
100 | modelWidget.options.values = []
101 |
102 | var models = await fetchModels(url);
103 |
104 |
105 | const vision_signal ="(vision)"
106 | models = models.filter(model => model.includes(vision_signal))
107 |
108 |
109 | var add_vision_models = ["mskimomadto/chat-gph-vision:latest (vision)", "moondream:latest (vision)", "llava:latest (vision)",
110 | "minicpm-v:latest (vision)"]
111 | add_vision_models.forEach(model => {
112 | if (!models.includes(model)) {
113 | models.unshift(model);}
114 | });
115 |
116 | // Update modelWidget options and value
117 | modelWidget.options.values = models;
118 | console.debug("Updated vision modelWidget.options.values:", modelWidget.options.values);
119 |
120 | if (models.includes(prevValue)) {
121 | modelWidget.value = prevValue; // stay on current.
122 | } else if (models.length > 0) {
123 | modelWidget.value = models[0]; // set first as default.
124 | }
125 |
126 | console.debug("Updated vision modelWidget.value:", modelWidget.value);
127 | };
128 |
129 |
130 | // Initial update
131 | await dummy(); //
132 | await updateModels();
133 | };
134 | } else if (["MyOllamaDeleteModel"].includes(nodeData.name) ) {
135 | const originalNodeCreated = nodeType.prototype.onNodeCreated;
136 | nodeType.prototype.onNodeCreated = async function () {
137 | if (originalNodeCreated) {
138 | originalNodeCreated.apply(this, arguments);
139 | }
140 |
141 | const urlWidget = this.widgets.find((w) => w.name === "url");
142 | const modelWidget = this.widgets.find((w) => w.name === "model");
143 | const updateModels = async () => {
144 | const url = urlWidget.value;
145 | const prevValue = modelWidget.value
146 | modelWidget.value = ''
147 | modelWidget.options.values = []
148 |
149 | var models = await fetchModels(url);
150 |
151 | // Update modelWidget options and value
152 | modelWidget.options.values = models;
153 | console.debug("Delete modelWidget.options.values:", modelWidget.options.values);
154 |
155 | if (models.includes(prevValue)) {
156 | modelWidget.value = prevValue; // stay on current.
157 | } else if (models.length > 0) {
158 | modelWidget.value = models[0]; // set first as default.
159 | }
160 |
161 | console.debug("Delete modelWidget.value:", modelWidget.value);
162 | };
163 |
164 | // Initial update
165 | await dummy(); //
166 | await updateModels();
167 | };
168 | }
169 | },
170 | });
171 |
--------------------------------------------------------------------------------