├── .github
│   └── workflows
│       └── publish.yml
├── LICENSE
├── README.md
├── __init__.py
├── concatenate.py
├── imgs
│   └── nodes.png
├── ollama_chat.py
├── ollama_vision.py
└── pyproject.toml
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: Publish to Comfy registry
2 | on:
3 | workflow_dispatch:
4 | push:
5 | branches:
6 | - main
7 | - master
8 | paths:
9 | - "pyproject.toml"
10 |
11 | permissions:
12 | issues: write
13 |
14 | jobs:
15 | publish-node:
16 | name: Publish Custom Node to registry
17 | runs-on: ubuntu-latest
18 | if: ${{ github.repository_owner == 'fairy-root' }}
19 | steps:
20 | - name: Check out code
21 | uses: actions/checkout@v4
22 | - name: Publish Custom Node
23 | uses: Comfy-Org/publish-node-action@v1
24 | with:
25 | ## Add your own personal access token to your Github Repository secrets and reference it here.
26 | personal_access_token: ${{ secrets.Registry_Access_Token }}
27 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 FairyRoot
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ComfyUI Ollama Integration
2 |
3 | This repository, maintained by [fairy-root](https://github.com/fairy-root), provides custom nodes for [ComfyUI](https://github.com/comfyanonymous/ComfyUI), integrating with the Ollama API for language model interactions and offering text manipulation capabilities.
4 |
5 |
6 | ![Nodes](imgs/nodes.png)
7 |
11 | ## Features
12 |
13 | - **Ollama Chat**: Interact with Ollama's language models, including streaming and logging capabilities.
14 | - **Concatenate Text LLMs**: Concatenates an instruction with a prompt using a configurable separator (see the sketch below).
15 | - **Ollama Vision**: Loads a LLaVA (or other vision-capable) model and answers prompts about the loaded images.
16 |
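For reference, the Concatenate Text node simply joins `instruction + separator + prompt`. A minimal sketch of that behaviour (mirroring `concatenate.py`, with a shortened example instruction) looks like this:

```python
# Minimal sketch of the Concatenate Text node's output (mirrors concatenate.py).
instruction = "Describe a creative solution to the problem. "  # shortened example instruction
separator = "PROMPT="
prompt = "beautiful woman. close-up, depth of field, ray tracing"

concatenated_text = instruction + separator + prompt
print(concatenated_text)
# Describe a creative solution to the problem. PROMPT=beautiful woman. close-up, depth of field, ray tracing
```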
17 | ---
18 |
19 | ## Installation
20 |
21 | ### Requirements
22 |
23 | - Python 3.x
24 | - [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
25 |
26 | ### Steps
27 |
28 | 1. Install the node:
29 |
30 | - Go to the `ComfyUI/custom_nodes` directory in a **terminal (cmd)**
31 | - Clone the repository
32 | ```bash
33 | git clone https://github.com/fairy-root/comfyui-ollama-llms.git
34 | ```
35 | - Restart ComfyUI
36 |
37 | 2. Install required Python packages:
38 | ```bash
39 | pip install ollama
40 | ```
41 |
42 | ## Getting Started
43 |
44 | ### Obtaining an Ollama Model
45 |
46 | **Phi3 is just an example because it is small and fast; you can choose any other model as well.**
47 |
48 | - To use the **Ollama Chat** and **Ollama Vision** nodes, you'll need **Ollama** itself. Visit [Ollama](https://ollama.com), install the Ollama app for your OS, then pull a model in the terminal:
49 | ```bash
50 | ollama pull phi3
51 | ```
52 | **or**
53 | ```bash
54 | ollama run phi3
55 | ```
56 |
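- To verify that the Ollama server is reachable and that the model was pulled, you can run a quick check from Python (a minimal sketch using the same `ollama` package the nodes call; `phi3` is just the example model above):

```python
import ollama

# List the models the local Ollama server reports (the same call the nodes use internally).
models = [m.model for m in ollama.list()["models"]]
print("Installed models:", models)

# Send a one-off chat request to confirm the example model responds.
response = ollama.chat(
    model="phi3",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response["message"]["content"])
```
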
57 | ## Donation
58 |
59 | Your support is appreciated:
60 |
61 | - USDt (TRC20): `TGCVbSSJbwL5nyXqMuKY839LJ5q5ygn2uS`
62 | - BTC: `13GS1ixn2uQAmFQkte6qA5p1MQtMXre6MT`
63 | - ETH (ERC20): `0xdbc7a7dafbb333773a5866ccf7a74da15ee654cc`
64 | - LTC: `Ldb6SDxUMEdYQQfRhSA3zi4dCUtfUdsPou`
65 |
66 | ## Author and Contact
67 |
68 | - GitHub: [FairyRoot](https://github.com/fairy-root)
69 | - Telegram: [@FairyRoot](https://t.me/FairyRoot)
70 |
71 | ## License
72 |
73 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
74 |
75 | ## Contributing
76 |
77 | Contributions are welcome! Please open an issue or submit a pull request for any improvements or features.
78 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | from .ollama_chat import OllamaService
2 | from .concatenate import ConcatenateText
3 | from .ollama_vision import LlavaVision
4 | from typing import ClassVar
5 |
6 |
7 | class Image:
8 | serialize_model: ClassVar[None] = None
9 |
10 |
11 | NODE_CLASS_MAPPINGS = {
12 | "ollama": OllamaService,
13 | "ConcatenateText": ConcatenateText,
14 | "llava": LlavaVision,
15 | }
16 |
17 | NODE_DISPLAY_NAME_MAPPINGS = {
18 | "ollama": "Ollama Chat",
19 | "ConcatenateText": "Concatenate Text Prompts LLMs",
20 | "llava": "Ollama Vision",
21 | }
22 |
23 | __all__ = ["NODE_CLASS_MAPPINGS", "NODE_DISPLAY_NAME_MAPPINGS"]
24 |
--------------------------------------------------------------------------------
/concatenate.py:
--------------------------------------------------------------------------------
1 | class ConcatenateText:
2 | @classmethod
3 | def INPUT_TYPES(cls):
4 | return {
5 | "required": {
6 | "instruction": ("STRING", {"multiline": True, "default": "Act as a creative problem solver that answers in prompts. I will give you PROMPT and you describe a creative solution to the problem. Use terse concise terms to describe the answer. Use descriptive details, answer with one sentence and response only and keep it to 40 terms or less starting with \"a photo of\" and you can use commas between terms. Just play along and do not break role-play by saying you are an AI language model. Just guess at the answer."}),
7 | "prompt": ("STRING", {"multiline": True, "default": "beautiful woman. close-up, depth of field, ray tracing"}),
8 | "separator": ("STRING", {"default": "PROMPT="})
9 | }
10 | }
11 |
12 | RETURN_TYPES = ("STRING",)
13 | FUNCTION = "concatenate"
14 | CATEGORY = "Ollama"
15 |
16 | def concatenate(self, instruction, prompt, separator):
17 | concatenated_text = instruction + separator + prompt
18 | return (concatenated_text,)
19 |
20 | # Node export details
21 | NODE_CLASS_MAPPINGS = {
22 | "ConcatenateText": ConcatenateText
23 | }
24 |
25 | NODE_DISPLAY_NAME_MAPPINGS = {
26 | "ConcatenateText": "Concatenate Text Prompts"
27 | }
--------------------------------------------------------------------------------
/imgs/nodes.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/fairy-root/comfyui-ollama-llms/9fc8ab1471181f47ef4e0b786ddeb4c0fdc6e1c4/imgs/nodes.png
--------------------------------------------------------------------------------
/ollama_chat.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | import sys
3 |
4 |
5 | def install_and_import(package):
6 | try:
7 | __import__(package)
8 | except ImportError:
9 | print(f"Package {package} not found. Installing...")
10 | subprocess.check_call([sys.executable, "-m", "pip", "install", package])
11 | finally:
12 | globals()[package] = __import__(package)
13 |
14 |
15 | # Install packages if not already installed
16 | install_and_import("ollama")
17 |
18 | # Import the installed packages
19 | import ollama
20 |
21 |
22 | class OllamaService:
23 | @classmethod
24 | def INPUT_TYPES(cls):
25 |         # Query the local Ollama server for its available models
26 | ollama_model_names = ["Unnamed Model"]
27 | try:
28 | ollama_models = ollama.list()["models"]
29 | ollama_model_names = [model.model for model in ollama_models]
30 | if not ollama_model_names:
31 | ollama_model_names = ["Unnamed Model"]
32 | ollama_model_names = [
33 | item for item in ollama_model_names if "llava" not in item.lower()
34 | ]
35 | except Exception as e:
36 | print(f"Warning: Could not connect to Ollama server to fetch models. Please ensure Ollama is running. Error: {e}")
37 | ollama_model_names = ["No Ollama Models Found"]
38 |
39 | return {
40 | "required": {
41 | "prompt": ("STRING", {"multiline": True, "default": "Enter your prompt here..."}),
42 | },
43 | "optional": {
44 | "ollama_model": (
45 | ollama_model_names,
46 | {"default": ollama_model_names[0] if ollama_model_names else ""},
47 | ),
48 | "Console_log": ("BOOLEAN", {"default": True}),
49 | },
50 | }
51 |
52 | RETURN_TYPES = ("STRING",)
53 | FUNCTION = "execute"
54 | CATEGORY = "Ollama"
55 |
56 | def execute(self, prompt, ollama_model=None, stream=True, Console_log=False):
57 | return self._ollama_interaction(ollama_model, prompt, stream, Console_log)
58 |
59 | def _ollama_interaction(self, model, prompt, stream, Console_log):
60 | try:
61 | response_stream = ollama.chat(
62 | model=model,
63 | messages=[{"role": "user", "content": prompt}],
64 | stream=stream,
65 | )
66 |
67 | output_text = ""
68 | if stream:
69 | for chunk in response_stream:
70 | output_text += chunk["message"]["content"]
71 | if Console_log:
72 | print(chunk["message"]["content"], end="", flush=True)
73 |             else:
74 |                 # With stream=False, ollama.chat returns a single response rather than an iterator of chunks.
75 |                 output_text = response_stream["message"]["content"]
76 | output_text = str(output_text)
77 | return (output_text,)
78 |
79 |         except Exception as e:
80 |             print("Error:", e)
81 |             return (f"Error: {e}",)
82 | @classmethod
83 | def IS_CHANGED(cls, *args):
84 | return True
85 |
86 |
87 | # Node export details
88 | NODE_CLASS_MAPPINGS = {"ollama": OllamaService}
89 |
90 | NODE_DISPLAY_NAME_MAPPINGS = {"ollama": "Ollama Chat"}
91 |
--------------------------------------------------------------------------------
/ollama_vision.py:
--------------------------------------------------------------------------------
1 | import subprocess
2 | import sys
3 | import numpy as np
4 | import base64
5 | from PIL import Image
6 | from io import BytesIO
7 |
8 |
9 | def install_and_import(package):
10 | try:
11 | __import__(package)
12 | except ImportError:
13 | print(f"Package {package} not found. Installing...")
14 | subprocess.check_call([sys.executable, "-m", "pip", "install", package])
15 | finally:
16 | globals()[package] = __import__(package)
17 |
18 |
19 | # Install packages if not already installed
20 | install_and_import("ollama")
21 |
22 | # Import the installed packages
23 | import ollama
24 |
25 |
26 | class LlavaVision:
27 | @classmethod
28 | def INPUT_TYPES(cls):
29 |         # Query the local Ollama server for available vision-capable models
30 | ollama_model_names = ["Unnamed Model"]
31 | try:
32 | ollama_models = ollama.list()["models"]
33 | ollama_model_names = [
34 | model.model
35 | for model in ollama_models
36 | if "llava" in model.model.lower()
37 | or "vision" in model.model.lower()
38 | or "vlm" in model.model.lower()
39 | or model.model.lower().endswith("-v")
40 | ]
41 | if not ollama_model_names:
42 | ollama_model_names = ["Unnamed Model"]
43 | except Exception as e:
44 | print(f"Warning: Could not connect to Ollama server to fetch models. Please ensure Ollama is running. Error: {e}")
45 | ollama_model_names = ["No Ollama Vision Models Found"]
46 |
47 | return {
48 | "required": {
49 | "prompt": ("STRING", {"multiline": True, "default": "Describe this image."}),
50 | "images": ("IMAGE",),
51 | },
52 | "optional": {
53 | "ollama_model": (
54 | ollama_model_names,
55 | {"default": ollama_model_names[0] if ollama_model_names else ""},
56 | ),
57 | },
58 | }
59 |
60 | RETURN_TYPES = ("STRING",)
61 | FUNCTION = "execute"
62 | CATEGORY = "Ollama"
63 |
64 | def execute(self, prompt, images=None, ollama_model=None):
65 |
66 | images_encoded = []
67 |
68 | if images is not None:
69 | for batch_number, image in enumerate(images):
70 | i = 255.0 * image.cpu().numpy()
71 | img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
72 | buffered = BytesIO()
73 | img.save(buffered, format="PNG")
74 | img_bytes = base64.b64encode(buffered.getvalue())
75 | images_encoded.append(str(img_bytes, "utf-8"))
76 | else:
77 |             return ("Error: No image provided",)
78 |
79 | return self._ollama_interaction(ollama_model, prompt, images_encoded)
80 |
81 | def _ollama_interaction(self, model, prompt, images):
82 | try:
83 | res = ollama.chat(
84 | model=model,
85 | messages=[{"role": "user", "content": prompt, "images": images}],
86 | )
87 |
88 | output_text = ""
89 | output_text += res["message"]["content"]
90 | output_text = str(output_text)
91 | return (output_text,)
92 |
93 | except Exception as e:
94 | print(f"Error during Ollama interaction: {e}")
95 |             # Return a single-element tuple to match RETURN_TYPES = ("STRING",).
96 |             return (f"Error during Ollama interaction: {e}",)
99 |
100 | @classmethod
101 | def IS_CHANGED(cls, *args):
102 | return True
103 |
104 |
105 | # Node export details
106 | NODE_CLASS_MAPPINGS = {"llava": LlavaVision}
107 |
108 | NODE_DISPLAY_NAME_MAPPINGS = {"llava": "Ollama Vision"}
109 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "comfyui-ollama-llms"
3 | description = "Ollama and Llava / vision integration for ComfyUI"
4 | version = "1.0.5"
5 | license = {file = "LICENSE"}
6 |
7 | [project.urls]
8 | Repository = "https://github.com/fairy-root/comfyui-ollama-llms"
9 | # Used by Comfy Registry https://comfyregistry.org
10 |
11 | [tool.comfy]
12 | PublisherId = "fairy-root"
13 | DisplayName = "comfyui-ollama-llms"
14 | Icon = ""
15 |
--------------------------------------------------------------------------------