6 |
7 | This repository provides custom nodes for ComfyUI, enabling direct access to **BRIA's API endpoints** for image generation and editing workflows. **API documentation** is available [**here**](https://docs.bria.ai/).
8 |
9 | BRIA's APIs and models are built for commercial use and trained on 100% licensed data; they do not contain copyrighted materials such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
10 |
11 | An API token is required to use the nodes in your workflows. You can get your BRIA API token at [https://bria.ai/api/](https://bria.ai/api/).
15 |
16 | For direct API endpoint use, you can find our APIs through partners like [**fal.ai**](https://fal.ai/models?keywords=bria).
17 | For source code and weights access, go to our [**Hugging Face**](https://huggingface.co/briaai) page.
18 |
19 | To load a workflow, import one of the compatible `workflow.json` files from this [folder](workflows).
20 |
30 | # Available Nodes
31 |
32 | ## Image Generation Nodes
33 | These nodes create high-quality images from text or image prompts, generating photorealistic or artistic results with support for various aspect ratios.
34 |
35 | | Node | Description |
36 | |------------------------|--------------------------------------------------------------------|
37 | | **Text2Image Base** | Generates images from text prompts, serving as the foundation for text-based image creation. |
38 | | **Text2Image Fast** | Optimized for speed, this node generates images from text prompts with faster results while maintaining quality. |
39 | | **Text2Image HD** | Optimized for high-resolution outputs, this node generates detailed and sharp visuals from text prompts. |
40 | | **Reimagine**           | Guides image generation using both a prompt and an input image, preserving the original structure and depth while introducing new materials, colors, and textures. |
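
As a rough sketch of what the Reimagine node sends under the hood (payload fields mirrored from `nodes/reimagine_node.py`; the input file name, prompt, and token are placeholders):

```python
import base64
import requests

with open("input.png", "rb") as f:  # placeholder structure-reference image
    structure_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "prompt": "a sofa made of woven rattan",
    "num_results": 1,
    "sync": True,
    "structure_image_file": structure_b64,  # guides layout and depth
    "structure_ref_influence": 0.75,        # node default
}
response = requests.post(
    "https://engine.prod.bria-api.com/v1/reimagine",
    json=payload,
    headers={"api_token": "YOUR_BRIA_API_TOKEN"},
)
response.raise_for_status()
print(response.json()["result"][0]["urls"][0])
```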
41 |
42 | ## Tailored Generation Nodes
43 | These nodes use pre-trained tailored models to generate images that faithfully reproduce specific visual IP elements or guidelines.
44 |
45 | | Node | Description |
46 | |------------------------|--------------------------------------------------------------------|
47 | | **Tailored Gen** | Generates images using a trained tailored model, reproducing specific visual IP elements or guidelines. Use the Tailored Model Info node to load the model's default settings. |
48 | | **Tailored Model Info**| Retrieves the default settings and prompt prefix of a trained tailored model, which can be used to configure the Tailored Gen node. |
49 | | **Restyle Portrait** | Transforms the style of a portrait while preserving the person's facial features. |
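
For reference, the handoff between the two tailored nodes amounts to two API calls, sketched below with endpoints and defaults taken from `nodes/tailored_model_info_node.py` and `nodes/tailored_gen_node.py` (the model id, token, and prompt are placeholders):

```python
import requests

BASE = "https://engine.prod.bria-api.com/v1"
HEADERS = {"api_token": "YOUR_BRIA_API_TOKEN"}
MODEL_ID = "YOUR_TAILORED_MODEL_ID"

# 1. Fetch the model's prompt prefix and derive its defaults
#    (models with a "light" training version default to fast mode, fewer steps).
info = requests.get(f"{BASE}/tailored-gen/models/{MODEL_ID}", headers=HEADERS).json()
fast = info["training_version"] == "light"
payload = {
    "prompt": info["generation_prefix"] + "a coffee mug on a desk",
    "num_results": 1,
    "sync": True,
    "fast": int(fast),
    "steps_num": 8 if fast else 30,
    "include_generation_prefix": False,
}

# 2. Generate with the tailored model.
response = requests.post(f"{BASE}/text-to-image/tailored/{MODEL_ID}",
                         json=payload, headers=HEADERS)
response.raise_for_status()
print(response.json()["result"][0]["urls"][0])
```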
50 |
51 | ## Image Editing Nodes
52 | These nodes modify specific parts of images, enabling adjustments while maintaining the integrity of the rest of the image.
53 |
54 | | Node | Description |
55 | |------------------------|--------------------------------------------------------------------|
56 | | **RMBG 2.0 (Remove Background)** | Removes the background from an image, isolating the foreground subject. |
57 | | **Replace Background** | Replaces an image’s background with a new one, guided by either a reference image or a prompt. |
58 | | **Expand Image** | Expands the dimensions of an image, generating new content to fill the extended areas. |
59 | | **Eraser**              | Removes objects or areas from an image, as specified by a mask. |
60 | | **GenFill**             | Generates objects from a prompt within a masked region of an image. |
61 | | **Erase Foreground** | Removes the foreground from an image, isolating the background. |
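
The mask-based nodes (Eraser, GenFill) share one request shape: the image and mask travel as base64-encoded PNGs (see `nodes/common.py`). A minimal sketch against the Eraser endpoint, with placeholder file names and token:

```python
import base64
import io

import requests
from PIL import Image

def to_base64(img: Image.Image) -> str:
    """Encode a PIL image as a base64 PNG string, as nodes/common.py does."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")

image = Image.open("photo.png")             # placeholder input image
mask = Image.open("mask.png").convert("L")  # white pixels mark the area to erase

response = requests.post(
    "https://engine.prod.bria-api.com/v1/eraser",
    json={"file": to_base64(image), "mask_file": to_base64(mask)},
    headers={"api_token": "YOUR_BRIA_API_TOKEN"},
)
response.raise_for_status()
print(response.json()["result_url"])  # URL of the edited image
```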
62 |
63 | ## Product Shot Editing Nodes
64 | These nodes create high-quality product images for eCommerce workflows.
65 |
66 | | Node | Description |
67 | |------------------------|--------------------------------------------------------------------|
68 | | **ShotByText**          | Modifies an image's background based on a text prompt. Powered by BRIA's ControlNet Background-Generation. |
69 | | **ShotByImage**         | Modifies an image's background based on a reference image. Uses BRIA's ControlNet Background-Generation and Image-Prompt. |
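
Under the hood, the ShotByText node posts the product image together with a scene description, as sketched below (fields mirrored from `nodes/shot_by_text_node.py`; the file name, description, and token are placeholders):

```python
import base64
import requests

with open("product.png", "rb") as f:  # placeholder product image
    product_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://engine.prod.bria-api.com/v1/product/lifestyle_shot_by_text",
    json={
        "file": product_b64,
        "scene_description": "on a marble kitchen counter, soft morning light",
        "optimize_description": True,
        "placement_type": "original",  # keep the product where it is
        "original_quality": True,
        "sync": True,
    },
    headers={"api_token": "YOUR_BRIA_API_TOKEN"},
)
response.raise_for_status()
print(response.json()["result"][0][0])  # first result's URL
```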
70 |
71 | # Installation
72 | There are two methods to install the BRIA ComfyUI API nodes:
73 |
74 | ### Method 1: Using ComfyUI's Custom Node Manager
75 | 1. Open ComfyUI.
76 | 2. Navigate to the [**Custom Node Manager**](https://github.com/ltdrdata/ComfyUI-Manager).
77 | 3. Click on 'Install Missing Nodes', or search for "BRIA API" and install the node pack from the manager.
78 |
79 | ### Method 2: Git Clone
80 | 1. Navigate to the `custom_nodes` directory of your ComfyUI installation:
81 | ```bash
82 | cd path_to_comfyui/custom_nodes
83 | ```
84 | 2. Clone this repository:
85 | ```bash
86 | git clone https://github.com/Bria-AI/ComfyUI-BRIA-API.git
87 | ```
88 |
89 | 3. Restart ComfyUI and load the workflows.
90 |
91 |
93 |
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
1 | from .nodes import (EraserNode, GenFillNode, ImageExpansionNode, ReplaceBgNode, RmbgNode, RemoveForegroundNode, ShotByTextNode, ShotByImageNode, TailoredGenNode,
2 | TailoredModelInfoNode, Text2ImageBaseNode, Text2ImageFastNode, Text2ImageHDNode, TailoredPortraitNode,
3 | ReimagineNode)
4 | # Map the node class to a name used internally by ComfyUI
5 | NODE_CLASS_MAPPINGS = {
6 | "BriaEraser": EraserNode, # Return the class, not an instance
7 | "BriaGenFill": GenFillNode,
8 | "ImageExpansionNode": ImageExpansionNode,
9 | "ReplaceBgNode": ReplaceBgNode,
10 | "RmbgNode": RmbgNode,
11 | "RemoveForegroundNode": RemoveForegroundNode,
12 | "ShotByTextNode": ShotByTextNode,
13 | "ShotByImageNode": ShotByImageNode,
14 | "BriaTailoredGen": TailoredGenNode,
15 | "TailoredModelInfoNode": TailoredModelInfoNode,
16 | "TailoredPortraitNode": TailoredPortraitNode,
17 | "Text2ImageBaseNode": Text2ImageBaseNode,
18 | "Text2ImageFastNode": Text2ImageFastNode,
19 | "Text2ImageHDNode": Text2ImageHDNode,
20 | "ReimagineNode": ReimagineNode,
21 | }
22 | # Map the node display name to the one shown in the ComfyUI node interface
23 | NODE_DISPLAY_NAME_MAPPINGS = {
24 | "BriaEraser": "Bria Eraser",
25 | "BriaGenFill": "Bria GenFill",
26 | "ImageExpansionNode": "Bria Image Expansion",
27 | "ReplaceBgNode": "Bria Replace Background",
28 | "RmbgNode": "Bria RMBG",
29 | "RemoveForegroundNode": "Bria Remove Foreground",
30 | "ShotByTextNode": "Bria Shot By Text",
31 | "ShotByImageNode": "Bria Shot By Image",
32 | "BriaTailoredGen": "Bria Tailored Gen",
33 | "TailoredModelInfoNode": "Bria Tailored Model Info",
34 | "TailoredPortraitNode": "Bria Restyle Portrait",
35 | "Text2ImageBaseNode": "Bria Text2Image Base",
36 | "Text2ImageFastNode": "Bria Text2Image Fast",
37 | "Text2ImageHDNode": "Bria Text2Image HD",
38 | "ReimagineNode": "Bria Reimagine",
39 | }
40 |
--------------------------------------------------------------------------------
/images/Bria Logo.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/images/background_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Bria-AI/ComfyUI-BRIA-API/bae0ed38428efa8d38c8f20753a345646524c204/images/background_workflow.png
--------------------------------------------------------------------------------
/nodes/__init__.py:
--------------------------------------------------------------------------------
1 | from .eraser_node import EraserNode
2 | from .generative_fill_node import GenFillNode
3 | from .image_expansion_node import ImageExpansionNode
4 | from .replace_bg_node import ReplaceBgNode
5 | from .rmbg_node import RmbgNode
6 | from .remove_foreground_node import RemoveForegroundNode
7 | from .shot_by_text_node import ShotByTextNode
8 | from .shot_by_image_node import ShotByImageNode
9 | from .tailored_gen_node import TailoredGenNode
10 | from .tailored_model_info_node import TailoredModelInfoNode
11 | from .tailored_portrait_node import TailoredPortraitNode
12 | from .text_2_image_base_node import Text2ImageBaseNode
13 | from .text_2_image_fast_node import Text2ImageFastNode
14 | from .text_2_image_hd_node import Text2ImageHDNode
15 | from .reimagine_node import ReimagineNode
--------------------------------------------------------------------------------
/nodes/common.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from PIL import Image
3 | import io
4 | import torch
5 | import base64
6 | from torchvision.transforms import ToPILImage
7 | import requests
8 |
9 | def postprocess_image(image):
10 | result_image = Image.open(io.BytesIO(image))
11 | result_image = result_image.convert("RGB")
12 | result_image = np.array(result_image).astype(np.float32) / 255.0
13 | result_image = torch.from_numpy(result_image)[None,]
14 | return result_image
15 |
16 | def image_to_base64(pil_image):
17 | # Convert a PIL image to a base64-encoded string
18 | buffered = io.BytesIO()
19 | pil_image.save(buffered, format="PNG") # Save the image to the buffer in PNG format
20 | buffered.seek(0) # Rewind the buffer to the beginning
21 | return base64.b64encode(buffered.getvalue()).decode('utf-8')
22 |
23 | def preprocess_image(image):
24 | if isinstance(image, torch.Tensor):
25 |         # ComfyUI IMAGE tensors arrive as (batch, height, width, channels)
26 | if image.dim() == 4: # (batch_size, height, width, channels)
27 | image = image.squeeze(0) # Remove the batch dimension (1)
28 | # Convert to PIL after permuting to (height, width, channels)
29 | image = ToPILImage()(image.permute(2, 0, 1)) # (height, width, channels)
30 | else:
31 | print("Unexpected image dimensions. Expected 4D tensor.")
32 | return image
33 |
34 |
35 | def preprocess_mask(mask):
36 | if isinstance(mask, torch.Tensor):
37 |         # ComfyUI MASK tensors arrive as (batch, height, width)
38 | if mask.dim() == 3: # (batch_size, height, width)
39 | mask = mask.squeeze(0) # Remove the batch dimension (1)
40 | # Convert to PIL (grayscale mask)
41 | mask = ToPILImage()(mask) # No permute needed for grayscale
42 | else:
43 | print("Unexpected mask dimensions. Expected 3D tensor.")
44 | return mask
45 |
46 |
47 | def process_request(api_url, image, mask, api_key):
48 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
49 | raise Exception("Please insert a valid API key.")
50 |
51 | # Check if image and mask are tensors, if so, convert to NumPy arrays
52 | if isinstance(image, torch.Tensor):
53 | image = preprocess_image(image)
54 | if isinstance(mask, torch.Tensor):
55 | mask = preprocess_mask(mask)
56 |
57 | # Convert the image and mask directly to Base64 strings
58 | image_base64 = image_to_base64(image)
59 | mask_base64 = image_to_base64(mask)
60 |
61 | # Prepare the API request payload
62 | payload = {
63 | "file": f"{image_base64}",
64 | "mask_file": f"{mask_base64}"
65 | }
66 |
67 | headers = {
68 | "Content-Type": "application/json",
69 | "api_token": f"{api_key}"
70 | }
71 |
72 | try:
73 | response = requests.post(api_url, json=payload, headers=headers)
74 | # Check for successful response
75 | if response.status_code == 200:
76 | print('response is 200')
77 | # Process the output image from API response
78 | response_dict = response.json()
79 | image_response = requests.get(response_dict['result_url'])
80 | result_image = Image.open(io.BytesIO(image_response.content))
81 | result_image = result_image.convert("RGBA")
82 | result_image = np.array(result_image).astype(np.float32) / 255.0
83 | result_image = torch.from_numpy(result_image)[None,]
87 | return (result_image,)
88 | else:
89 | raise Exception(f"Error: API request failed with status code {response.status_code}")
90 |
91 | except Exception as e:
92 | raise Exception(f"{e}")
93 |
--------------------------------------------------------------------------------
/nodes/eraser_node.py:
--------------------------------------------------------------------------------
1 | from .common import process_request
2 |
3 | class EraserNode():
4 | @classmethod
5 | def INPUT_TYPES(self):
6 | return {
7 | "required": {
8 | "image": ("IMAGE",), # Input image from another node
9 | "mask": ("MASK",), # Binary mask input
10 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}) # API Key input with a default value
11 | }
12 | }
13 |
14 | RETURN_TYPES = ("IMAGE",)
15 | RETURN_NAMES = ("output_image",)
16 | CATEGORY = "API Nodes"
17 | FUNCTION = "execute" # This is the method that will be executed
18 |
19 | def __init__(self):
20 | self.api_url = "https://engine.prod.bria-api.com/v1/eraser" # Eraser API URL
21 |
22 | # Define the execute method as expected by ComfyUI
23 | def execute(self, image, mask, api_key):
24 | return process_request(self.api_url, image, mask, api_key)
25 |
--------------------------------------------------------------------------------
/nodes/generative_fill_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import image_to_base64, preprocess_image, preprocess_mask
8 |
9 |
10 | class GenFillNode():
11 | @classmethod
12 | def INPUT_TYPES(self):
13 | return {
14 | "required": {
15 | "image": ("IMAGE",), # Input image from another node
16 | "mask": ("MASK",), # Binary mask input
17 | "prompt": ("STRING",),
18 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
19 | },
20 | "optional": {
21 | "seed": ("INT", {"default": 123456})
22 | }
23 | }
24 |
25 | RETURN_TYPES = ("IMAGE",)
26 | RETURN_NAMES = ("output_image",)
27 | CATEGORY = "API Nodes"
28 | FUNCTION = "execute" # This is the method that will be executed
29 |
30 | def __init__(self):
31 |         self.api_url = "https://engine.prod.bria-api.com/v1/gen_fill"  # GenFill API URL
32 |
33 | # Define the execute method as expected by ComfyUI
34 |     def execute(self, image, mask, prompt, api_key, seed=123456):
35 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
36 | raise Exception("Please insert a valid API key.")
37 |
38 | # Check if image and mask are tensors, if so, convert to NumPy arrays
39 | if isinstance(image, torch.Tensor):
40 | image = preprocess_image(image)
41 | if isinstance(mask, torch.Tensor):
42 | mask = preprocess_mask(mask)
43 |
44 | # Convert the image and mask directly to Base64 strings
45 | image_base64 = image_to_base64(image)
46 | mask_base64 = image_to_base64(mask)
47 |
48 | # Prepare the API request payload
49 | payload = {
50 | "file": f"{image_base64}",
51 | "mask_file": f"{mask_base64}",
52 | "prompt": prompt,
53 | "negative_prompt": "blurry",
54 | "sync": True,
55 | "seed": seed,
56 | }
57 |
58 | headers = {
59 | "Content-Type": "application/json",
60 | "api_token": f"{api_key}"
61 | }
62 |
63 | try:
64 | response = requests.post(self.api_url, json=payload, headers=headers)
65 | # Check for successful response
66 | if response.status_code == 200:
67 | print('response is 200')
68 | # Process the output image from API response
69 | response_dict = response.json()
70 | image_response = requests.get(response_dict['urls'][0])
71 | result_image = Image.open(io.BytesIO(image_response.content))
72 | result_image = result_image.convert("RGB")
73 | result_image = np.array(result_image).astype(np.float32) / 255.0
74 | result_image = torch.from_numpy(result_image)[None,]
75 | return (result_image,)
76 | else:
77 | raise Exception(f"Error: API request failed with status code {response.status_code}")
78 |
79 | except Exception as e:
80 | raise Exception(f"{e}")
81 |
--------------------------------------------------------------------------------
/nodes/image_expansion_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import image_to_base64, preprocess_image
8 |
9 |
10 | class ImageExpansionNode():
11 | @classmethod
12 | def INPUT_TYPES(self):
13 | return {
14 | "required": {
15 | "image": ("IMAGE",), # Input image from another node
16 | "original_image_size": ("STRING",),
17 | "original_image_location": ("STRING",),
18 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
19 | },
20 |
21 | "optional": {
22 | "canvas_size": ("STRING", {"default": "1000, 1000"}),
23 | "prompt": ("STRING", {"default": ""}),
24 | "seed": ("INT", {"default": 681794}),
25 | "negative_prompt": ("STRING", {"default": "Ugly, mutated"}),
26 | "content_moderation": ("BOOLEAN", {"default": False}),
27 | }
28 | }
29 |
30 | RETURN_TYPES = ("IMAGE",)
31 | RETURN_NAMES = ("output_image",)
32 | CATEGORY = "API Nodes"
33 | FUNCTION = "execute" # This is the method that will be executed
34 |
35 | def __init__(self):
36 | self.api_url = "https://engine.prod.bria-api.com/v1/image_expansion" # Image Expansion API URL
37 |
38 | # Define the execute method as expected by ComfyUI
39 | def execute(self, image,
40 | original_image_size,
41 | original_image_location,
42 | canvas_size,
43 | prompt,
44 | seed,
45 | negative_prompt,
46 | content_moderation,
47 | api_key):
48 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
49 | raise Exception("Please insert a valid API key.")
50 |
51 | original_image_size = [int(x.strip()) for x in original_image_size.split(",")]
52 | original_image_location = [int(x.strip()) for x in original_image_location.split(",")]
53 | canvas_size = [int(x.strip()) for x in canvas_size.split(",")]
54 |
55 | if prompt == "":
56 | prompt = None
57 | if negative_prompt == "":
58 | negative_prompt = " " # hack to avoid error in triton which expects non-empty string
59 |
60 | # Check if image and mask are tensors, if so, convert to NumPy arrays
61 | if isinstance(image, torch.Tensor):
62 | image = preprocess_image(image)
63 |
64 | # Convert the image directly to Base64 string
65 | image_base64 = image_to_base64(image)
66 |
67 | # Prepare the API request payload
68 | payload = {
69 | "file": f"{image_base64}",
70 | "original_image_size": original_image_size,
71 | "original_image_location": original_image_location,
72 | "canvas_size": canvas_size,
73 | "prompt": prompt,
74 | "negative_prompt": negative_prompt,
75 | "seed": seed,
76 | "content_moderation": content_moderation
77 | }
78 |
79 | headers = {
80 | "Content-Type": "application/json",
81 | "api_token": f"{api_key}"
82 | }
83 |
84 | try:
85 | response = requests.post(self.api_url, json=payload, headers=headers)
86 | # Check for successful response
87 | if response.status_code == 200:
88 | print('response is 200')
89 | # Process the output image from API response
90 | response_dict = response.json()
91 | image_response = requests.get(response_dict['result_url'])
92 | result_image = Image.open(io.BytesIO(image_response.content))
93 | result_image = result_image.convert("RGB")
94 | result_image = np.array(result_image).astype(np.float32) / 255.0
95 | result_image = torch.from_numpy(result_image)[None,]
96 | return (result_image,)
97 | else:
98 | raise Exception(f"Error: API request failed with status code {response.status_code}")
99 |
100 | except Exception as e:
101 | raise Exception(f"{e}")
102 |
--------------------------------------------------------------------------------
/nodes/reimagine_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .common import postprocess_image, preprocess_image, image_to_base64
4 |
5 |
6 | class ReimagineNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "api_key": ("STRING", ),
12 | "prompt": ("STRING",),
13 | },
14 | "optional": {
15 | "seed": ("INT", {"default": -1}),
16 | "steps_num": ("INT", {"default": 12}), # if used with tailored, possibly get this from the tailored model info node
17 | "structure_ref_influence": ("FLOAT", {"default": 0.75}),
18 | "fast": ("INT", {"default": 0}), # if used with tailored, possibly get this from the tailored model info node
19 | "structure_image": ("IMAGE", ),
20 | "tailored_model_id": ("STRING", ),
21 | "tailored_model_influence": ("FLOAT", {"default": 0.5}),
22 | "tailored_generation_prefix": ("STRING",), # if used with tailored, possibly get this from the tailored model info node
23 | "content_moderation": ("INT", {"default": 0}),
24 | }
25 | }
26 |
27 | RETURN_TYPES = ("IMAGE",)
28 | RETURN_NAMES = ("output_image",)
29 | CATEGORY = "API Nodes"
30 | FUNCTION = "execute" # This is the method that will be executed
31 |
32 | def __init__(self):
33 | self.api_url = "https://engine.prod.bria-api.com/v1/reimagine" #"http://0.0.0.0:5000/v1/reimagine"
34 |
35 | def execute(
36 | self, api_key, prompt, seed,
37 | steps_num, fast, structure_ref_influence, structure_image=None,
38 | tailored_model_id=None, tailored_model_influence=None, tailored_generation_prefix=None,
39 | content_moderation=0,
40 | ):
41 | payload = {
42 |             "prompt": (tailored_generation_prefix or "") + prompt,  # prefix is optional and may be absent
43 | "num_results": 1,
44 | "sync": True,
45 | "seed": seed,
46 | "steps_num": steps_num,
47 | "include_generation_prefix": False,
48 | "content_moderation": content_moderation,
49 | }
50 | if structure_image is not None:
51 | structure_image = preprocess_image(structure_image)
52 | structure_image = image_to_base64(structure_image)
53 | payload["structure_image_file"] = structure_image
54 | payload["structure_ref_influence"] = structure_ref_influence
55 | if tailored_model_id is not None and tailored_model_id != "":
56 | payload["tailored_model_id"] = tailored_model_id
57 | payload["tailored_model_influence"] = tailored_model_influence
58 | response = requests.post(
59 | self.api_url,
60 | json=payload,
61 | headers={"api_token": api_key}
62 | )
63 | if response.status_code == 200:
64 | response_dict = response.json()
65 | image_response = requests.get(response_dict['result'][0]["urls"][0])
66 | result_image = postprocess_image(image_response.content)
67 | return (result_image,)
68 | else:
69 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
70 |
--------------------------------------------------------------------------------
/nodes/remove_foreground_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import preprocess_image, image_to_base64
8 |
9 |
10 | class RemoveForegroundNode():
11 | @classmethod
12 | def INPUT_TYPES(self):
13 | return {
14 | "required": {
15 | "image": ("IMAGE",), # Input image from another node
16 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
17 | },
18 | "optional": {
19 | "content_moderation": ("BOOLEAN", {"default": False}),
20 | }
21 | }
22 |
23 | RETURN_TYPES = ("IMAGE",)
24 | RETURN_NAMES = ("output_image",)
25 | CATEGORY = "API Nodes"
26 | FUNCTION = "execute" # This is the method that will be executed
27 |
28 | def __init__(self):
29 |         self.api_url = "https://engine.prod.bria-api.com/v1/erase_foreground"  # Erase Foreground API URL
30 |
31 | # Define the execute method as expected by ComfyUI
32 | def execute(self, image, content_moderation, api_key):
33 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
34 | raise Exception("Please insert a valid API key.")
35 |
36 | # Check if image is tensor, if so, convert to NumPy array
37 | if isinstance(image, torch.Tensor):
38 | image = preprocess_image(image)
39 |
40 | # Prepare the API request payload
47 | payload = {"file": image_to_base64(image), "content_moderation": content_moderation}
48 |
49 | headers = {
50 | "Content-Type": "application/json",
51 | "api_token": f"{api_key}"
52 | }
53 |
54 | try:
55 | response = requests.post(self.api_url, json=payload, headers=headers)
56 | # Check for successful response
57 | if response.status_code == 200:
58 | print('response is 200')
59 | # Process the output image from API response
60 | response_dict = response.json()
61 | image_response = requests.get(response_dict['result_url'])
62 | result_image = Image.open(io.BytesIO(image_response.content))
63 | result_image = np.array(result_image).astype(np.float32) / 255.0
64 | result_image = torch.from_numpy(result_image)[None,]
65 | return (result_image,)
66 | else:
67 | raise Exception(f"Error: API request failed with status code {response.status_code}")
68 |
69 | except Exception as e:
70 | raise Exception(f"{e}")
71 |
--------------------------------------------------------------------------------
/nodes/replace_bg_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import image_to_base64, preprocess_image, preprocess_mask
8 |
9 |
10 | class ReplaceBgNode():
11 | @classmethod
12 | def INPUT_TYPES(self):
13 | return {
14 | "required": {
15 | "image": ("IMAGE",), # Input image from another node
16 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
17 | },
18 | "optional": {
19 | "fast": ("BOOLEAN", {"default": True}),
20 | "bg_prompt": ("STRING",),
21 | "ref_image": ("IMAGE",), # Input ref image from another node
22 | "refine_prompt": ("BOOLEAN", {"default": True}),
23 | "enhance_ref_image": ("BOOLEAN", {"default": True}),
24 | "original_quality": ("BOOLEAN", {"default": False}),
25 | "force_rmbg": ("BOOLEAN", {"default": False}),
26 |                 "negative_prompt": ("STRING", {"default": ""}),
27 | "seed": ("INT", {"default": 681794}),
28 | "content_moderation": ("BOOLEAN", {"default": False}),
29 | }
30 | }
31 |
32 | RETURN_TYPES = ("IMAGE",)
33 | RETURN_NAMES = ("output_image",)
34 | CATEGORY = "API Nodes"
35 | FUNCTION = "execute" # This is the method that will be executed
36 |
37 | def __init__(self):
38 | self.api_url = "https://engine.prod.bria-api.com/v1/background/replace" # Replace BG API URL
39 |
40 | # Define the execute method as expected by ComfyUI
41 | def execute(self, image, fast,
42 | refine_prompt,
43 | enhance_ref_image,
44 | original_quality,
45 | force_rmbg,
46 | negative_prompt,
47 | seed,
48 | api_key,
49 | content_moderation,
50 | bg_prompt=None,
51 | ref_image=None,):
52 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
53 | raise Exception("Please insert a valid API key.")
54 |
55 | # Check if image and mask are tensors, if so, convert to NumPy arrays
56 | if isinstance(image, torch.Tensor):
57 | image = preprocess_image(image)
58 |
59 | # Convert the image and mask directly to Base64 strings
60 | image_base64 = image_to_base64(image)
61 | ref_image_file = None # initialization, will be updated if it is supplied
62 | if ref_image is not None:
63 | ref_image = preprocess_image(ref_image)
64 | ref_image_file = image_to_base64(ref_image)
65 |
66 | # Prepare the API request payload
67 | payload = {
68 | "file": f"{image_base64}",
69 | "fast": fast,
70 | "bg_prompt": bg_prompt,
71 | "ref_image_file": ref_image_file,
72 | "refine_prompt": refine_prompt,
73 | "enhance_ref_image": enhance_ref_image,
74 | "original_quality": original_quality,
75 | "force_rmbg": force_rmbg,
76 | "negative_prompt": negative_prompt,
77 | "seed": seed,
78 | "sync": True,
79 | "num_results": 1,
80 | "content_moderation": content_moderation
81 | }
82 |
83 | headers = {
84 | "Content-Type": "application/json",
85 | "api_token": f"{api_key}"
86 | }
87 |
88 | try:
89 | response = requests.post(self.api_url, json=payload, headers=headers)
90 | # Check for successful response
91 | if response.status_code == 200:
92 | print('response is 200')
93 | # Process the output image from API response
94 | response_dict = response.json()
95 | image_response = requests.get(response_dict['result'][0][0]) # first indexing for batched, second for url
96 | result_image = Image.open(io.BytesIO(image_response.content))
97 | result_image = result_image.convert("RGB")
98 | result_image = np.array(result_image).astype(np.float32) / 255.0
99 | result_image = torch.from_numpy(result_image)[None,]
100 | return (result_image,)
101 | else:
102 | raise Exception(f"Error: API request failed with status code {response.status_code}")
103 |
104 | except Exception as e:
105 | raise Exception(f"{e}")
106 |
--------------------------------------------------------------------------------
/nodes/rmbg_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import preprocess_image
8 | from io import BytesIO
9 |
10 | class RmbgNode():
11 | @classmethod
12 | def INPUT_TYPES(self):
13 | return {
14 | "required": {
15 | "image": ("IMAGE",), # Input image from another node
16 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
17 | },
18 | "optional": {
19 | "content_moderation": ("BOOLEAN", {"default": False}),
20 | }
21 | }
22 |
23 | RETURN_TYPES = ("IMAGE",)
24 | RETURN_NAMES = ("output_image",)
25 | CATEGORY = "API Nodes"
26 | FUNCTION = "execute" # This is the method that will be executed
27 |
28 | def __init__(self):
29 | self.api_url = "https://engine.prod.bria-api.com/v1/background/remove" # RMBG API URL
30 |
31 | # Define the execute method as expected by ComfyUI
32 | def execute(self, image, content_moderation, api_key):
33 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
34 | raise Exception("Please insert a valid API key.")
35 |
36 | # Check if image is tensor, if so, convert to NumPy array
37 | if isinstance(image, torch.Tensor):
38 | image = preprocess_image(image)
39 |
40 | # Prepare the API request payload
41 | image_buffer = BytesIO()
42 |         image.convert("RGB").save(image_buffer, format="JPEG")  # JPEG does not support an alpha channel
43 |
44 | # Get binary data from buffer
45 | image_buffer.seek(0) # Move cursor to the start of the buffer
46 | binary_data = image_buffer.read()
47 |
48 | files=[('file',('temp_img.jpeg', BytesIO(binary_data),'image/jpeg'))]
49 | payload = {"content_moderation": content_moderation}
50 |
51 | try:
52 | response = requests.post(self.api_url, data=payload, headers={"api_token": api_key}, files=files)
53 | # Check for successful response
54 | if response.status_code == 200:
55 | print('response is 200')
56 | # Process the output image from API response
57 | response_dict = response.json()
58 | image_response = requests.get(response_dict['result_url'])
59 | result_image = Image.open(io.BytesIO(image_response.content))
60 | result_image = np.array(result_image).astype(np.float32) / 255.0
61 | result_image = torch.from_numpy(result_image)[None,]
62 | return (result_image,)
63 | else:
64 | raise Exception(f"Error: API request failed with status code {response.status_code}")
65 |
66 | except Exception as e:
67 | raise Exception(f"{e}")
68 |
--------------------------------------------------------------------------------
/nodes/shot_by_image_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import torch
3 |
4 | from .common import postprocess_image, preprocess_image, image_to_base64
5 |
6 | class ShotByImageNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "image": ("IMAGE",), # Input image from another node
12 | "ref_image": ("IMAGE",), # ref image from another node
13 | "enhance_ref_image": ("INT", {"default": 1}),
14 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}) # API Key input with a default value
15 | },
16 | "optional": {
17 | "content_moderation": ("BOOLEAN", {"default": False}),
18 | }
19 | }
20 |
21 | RETURN_TYPES = ("IMAGE",)
22 | RETURN_NAMES = ("output_image",)
23 | CATEGORY = "API Nodes"
24 | FUNCTION = "execute" # This is the method that will be executed
25 |
26 | def __init__(self):
27 |         self.api_url = "https://engine.prod.bria-api.com/v1/product/lifestyle_shot_by_image"  # Lifestyle Shot By Image API URL
28 |
29 | # Define the execute method as expected by ComfyUI
30 | def execute(self, image, ref_image, api_key, enhance_ref_image, content_moderation):
31 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
32 | raise Exception("Please insert a valid API key.")
33 |
34 | # Check if image and mask are tensors, if so, convert to NumPy arrays
35 | if isinstance(image, torch.Tensor):
36 | image = preprocess_image(image)
37 | if isinstance(ref_image, torch.Tensor):
38 | ref_image = preprocess_image(ref_image)
39 |
40 | # Convert the image and mask directly to Base64 strings
41 | image_base64 = image_to_base64(image)
42 | ref_image_base64 = image_to_base64(ref_image)
43 | enhance_ref_image = bool(enhance_ref_image)
44 |
45 | payload = {
46 | "file": image_base64,
47 | "ref_image_file": ref_image_base64,
48 | "enhance_ref_image": enhance_ref_image,
49 | "placement_type": "original",
50 | "original_quality": True,
51 | "sync": True,
52 | "content_moderation": content_moderation
53 | }
54 | headers = {
55 | "Content-Type": "application/json",
56 | "api_token": f"{api_key}"
57 | }
58 | try:
59 | response = requests.post(self.api_url, json=payload, headers=headers)
60 | # Check for successful response
61 | if response.status_code == 200:
62 | print('response is 200')
63 | # Process the output image from API response
64 | response_dict = response.json()
65 | image_response = requests.get(response_dict['result'][0][0])
66 | result_image = postprocess_image(image_response.content)
67 | return (result_image,)
68 | else:
69 | raise Exception(f"Error: API request failed with status code {response.status_code}")
70 |
71 | except Exception as e:
72 | raise Exception(f"{e}")
73 |
--------------------------------------------------------------------------------
/nodes/shot_by_text_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import torch
3 |
4 | from .common import postprocess_image, preprocess_image, image_to_base64
5 |
6 | class ShotByTextNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "image": ("IMAGE",), # Input image from another node
12 | "scene_description": ("STRING",),
13 | "optimize_description": ("INT", {"default": 1}),
14 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}) # API Key input with a default value
15 | },
16 | "optional": {
17 | "content_moderation": ("BOOLEAN", {"default": False}),
18 | }
19 | }
20 |
21 | RETURN_TYPES = ("IMAGE",)
22 | RETURN_NAMES = ("output_image",)
23 | CATEGORY = "API Nodes"
24 | FUNCTION = "execute" # This is the method that will be executed
25 |
26 | def __init__(self):
27 |         self.api_url = "https://engine.prod.bria-api.com/v1/product/lifestyle_shot_by_text"  # Lifestyle Shot By Text API URL
28 |
29 | # Define the execute method as expected by ComfyUI
30 | def execute(self, image, api_key, scene_description, optimize_description, content_moderation):
31 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
32 | raise Exception("Please insert a valid API key.")
33 |
34 | # Check if image and mask are tensors, if so, convert to NumPy arrays
35 | if isinstance(image, torch.Tensor):
36 | image = preprocess_image(image)
37 |
38 | optimize_description = bool(optimize_description)
39 | image_base64 = image_to_base64(image)
40 | payload = {
41 | "file": image_base64,
42 | "scene_description": scene_description,
43 | "optimize_description": optimize_description,
44 | "placement_type": "original",
45 | "original_quality": True,
46 | "sync": True,
47 | "content_moderation": content_moderation
49 | }
50 | headers = {
51 | "Content-Type": "application/json",
52 | "api_token": f"{api_key}"
53 | }
54 | try:
55 | response = requests.post(self.api_url, json=payload, headers=headers)
56 | # Check for successful response
57 | if response.status_code == 200:
58 | print('response is 200')
59 | # Process the output image from API response
60 | response_dict = response.json()
61 | image_response = requests.get(response_dict['result'][0][0])
62 | result_image = postprocess_image(image_response.content)
63 | return (result_image,)
64 | else:
65 | raise Exception(f"Error: API request failed with status code {response.status_code}")
66 |
67 | except Exception as e:
68 | raise Exception(f"{e}")
69 |
70 |
--------------------------------------------------------------------------------
/nodes/tailored_gen_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .common import postprocess_image, preprocess_image, image_to_base64
4 |
5 |
6 | class TailoredGenNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "model_id": ("STRING",),
12 | "api_key": ("STRING", ),
13 | },
14 | "optional": {
15 | "prompt": ("STRING",),
16 | "generation_prefix": ("STRING",), # possibly get this from the tailored model info node
17 | "aspect_ratio": (["1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9"], {"default": "4:3"}),
18 | "seed": ("INT", {"default": -1}),
19 | "model_influence": ("FLOAT", {"default": 1.0}),
20 | "negative_prompt": ("STRING", {"default": ""}),
21 | "fast": ("INT", {"default": 1}), # possibly get this from the tailored model info node
22 | "steps_num": ("INT", {"default": 8}), # possibly get this from the tailored model info node
23 | "guidance_method_1": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
24 | "guidance_method_1_scale": ("FLOAT", {"default": 1.0}),
25 | "guidance_method_1_image": ("IMAGE", ),
26 | "guidance_method_2": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
27 | "guidance_method_2_scale": ("FLOAT", {"default": 1.0}),
28 | "guidance_method_2_image": ("IMAGE", ),
29 | "content_moderation": ("INT", {"default": 0}),
30 | }
31 | }
32 |
33 | RETURN_TYPES = ("IMAGE",)
34 | RETURN_NAMES = ("output_image",)
35 | CATEGORY = "API Nodes"
36 | FUNCTION = "execute" # This is the method that will be executed
37 |
38 | def __init__(self):
39 | self.api_url = "https://engine.prod.bria-api.com/v1/text-to-image/tailored/" #"http://0.0.0.0:5000/v1/text-to-image/tailored/"
40 |
41 | def execute(
42 | self, model_id, api_key, prompt, generation_prefix, aspect_ratio,
43 | seed, model_influence, negative_prompt, fast, steps_num,
44 | guidance_method_1=None, guidance_method_1_scale=None, guidance_method_1_image=None,
45 | guidance_method_2=None, guidance_method_2_scale=None, guidance_method_2_image=None,
46 | content_moderation=0,
47 | ):
48 | payload = {
49 | "prompt": generation_prefix + prompt,
50 | "num_results": 1,
51 | "aspect_ratio": aspect_ratio,
52 | "sync": True,
53 | "seed": seed,
54 | "model_influence": model_influence,
55 | "negative_prompt": negative_prompt,
56 | "fast": fast,
57 | "steps_num": steps_num,
58 | "include_generation_prefix": False,
59 | "content_moderation": content_moderation,
60 | }
61 | if guidance_method_1_image is not None:
62 | guidance_method_1_image = preprocess_image(guidance_method_1_image)
63 | guidance_method_1_image = image_to_base64(guidance_method_1_image)
64 | payload["guidance_method_1"] = guidance_method_1
65 | payload["guidance_method_1_scale"] = guidance_method_1_scale
66 | payload["guidance_method_1_image_file"] = guidance_method_1_image
67 | if guidance_method_2_image is not None:
68 | guidance_method_2_image = preprocess_image(guidance_method_2_image)
69 | guidance_method_2_image = image_to_base64(guidance_method_2_image)
70 | payload["guidance_method_2"] = guidance_method_2
71 | payload["guidance_method_2_scale"] = guidance_method_2_scale
72 | payload["guidance_method_2_image_file"] = guidance_method_2_image
73 | response = requests.post(
74 | self.api_url + model_id,
75 | json=payload,
76 | headers={"api_token": api_key}
77 | )
78 | if response.status_code == 200:
79 | response_dict = response.json()
80 | image_response = requests.get(response_dict['result'][0]["urls"][0])
81 | result_image = postprocess_image(image_response.content)
82 | return (result_image,)
83 | else:
84 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
85 |
--------------------------------------------------------------------------------
/nodes/tailored_model_info_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 |
4 | class TailoredModelInfoNode():
5 | @classmethod
6 | def INPUT_TYPES(self):
7 | return {
8 | "required": {
9 | "model_id": ("STRING",),
10 | "api_key": ("STRING", )
11 | }
12 | }
13 |
14 | RETURN_TYPES = ("STRING", "STRING","INT", "INT", )
15 | RETURN_NAMES = ("generation_prefix", "model_id", "default_fast", "default_steps_num", )
16 | CATEGORY = "API Nodes"
17 | FUNCTION = "execute" # This is the method that will be executed
18 |
19 | def __init__(self):
20 | self.api_url = "https://engine.prod.bria-api.com/v1/tailored-gen/models/"
21 |
22 | # Define the execute method as expected by ComfyUI
23 | def execute(self, model_id, api_key):
24 | response = requests.get(
25 | self.api_url + model_id,
26 | headers={"api_token": api_key}
27 | )
28 | if response.status_code == 200:
29 | generation_prefix = response.json()["generation_prefix"]
30 | training_version = response.json()["training_version"]
31 | default_fast = 1 if training_version == "light" else 0
32 | default_steps_num = 8 if training_version == "light" else 30
33 | return (generation_prefix, model_id, default_fast, default_steps_num,)
34 | else:
35 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
36 |
--------------------------------------------------------------------------------
/nodes/tailored_portrait_node.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import requests
3 | from PIL import Image
4 | import io
5 | import torch
6 |
7 | from .common import image_to_base64, preprocess_image
8 |
9 | class TailoredPortraitNode():
10 | @classmethod
11 | def INPUT_TYPES(self):
12 | return {
13 | "required": {
14 | "image": ("IMAGE",), # Input image from another node
15 | "tailored_model_id": ("INT",),
16 | "api_key": ("STRING", {"default": "BRIA_API_TOKEN"}), # API Key input with a default value
17 | },
18 | "optional": {
19 | "seed": ("INT", {"default": 123456}),
20 | "tailored_model_influence": ("FLOAT", {"default": 0.9}),
21 | "id_strength": ("FLOAT", {"default": 0.7}),
22 | }
23 | }
24 |
25 | RETURN_TYPES = ("IMAGE",)
26 | RETURN_NAMES = ("output_image",)
27 | CATEGORY = "API Nodes"
28 | FUNCTION = "execute" # This is the method that will be executed
29 |
30 | def __init__(self):
31 |         self.api_url = "https://engine.prod.bria-api.com/v1/tailored-gen/restyle_portrait"  # Restyle Portrait API URL
32 |
33 | # Define the execute method as expected by ComfyUI
34 | def execute(self, image, tailored_model_id, api_key, seed, tailored_model_influence, id_strength):
35 | if api_key.strip() == "" or api_key.strip() == "BRIA_API_TOKEN":
36 | raise Exception("Please insert a valid API key.")
37 |
38 |         # Convert the image tensor to a PIL image if needed
39 | if isinstance(image, torch.Tensor):
40 | image = preprocess_image(image)
41 |
42 | image_base64 = image_to_base64(image)
43 |
44 | # Prepare the API request payload
45 | payload = {
46 | "id_image_file": f"{image_base64}",
47 | "tailored_model_id": tailored_model_id,
48 | "tailored_model_influence": tailored_model_influence,
49 | "id_strength": id_strength,
50 | "seed": seed
51 | }
52 |
53 | headers = {
54 | "Content-Type": "application/json",
55 | "api_token": f"{api_key}"
56 | }
57 |
58 | try:
59 | response = requests.post(self.api_url, json=payload, headers=headers)
60 | # Check for successful response
61 | if response.status_code == 200:
62 | print('response is 200')
63 | # Process the output image from API response
64 | response_dict = response.json()
65 | image_response = requests.get(response_dict['image_res'])
66 | result_image = Image.open(io.BytesIO(image_response.content))
67 | result_image = result_image.convert("RGB")
68 | result_image = np.array(result_image).astype(np.float32) / 255.0
69 | result_image = torch.from_numpy(result_image)[None,]
70 | return (result_image,)
71 | else:
72 | raise Exception(f"Error: API request failed with status code {response.status_code}")
73 |
74 | except Exception as e:
75 | raise Exception(f"{e}")
76 |
--------------------------------------------------------------------------------
/nodes/text_2_image_base_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .common import postprocess_image, preprocess_image, image_to_base64
4 |
5 |
6 | class Text2ImageBaseNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "api_key": ("STRING", ),
12 | },
13 | "optional": {
14 | "prompt": ("STRING",),
15 | "aspect_ratio": (["1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9"], {"default": "4:3"}),
16 | "seed": ("INT", {"default": -1}),
17 | "negative_prompt": ("STRING", {"default": ""}),
18 | "steps_num": ("INT", {"default": 30}),
19 | "prompt_enhancement": ("INT", {"default": 0}),
20 | "text_guidance_scale": ("INT", {"default": 5}),
21 | "medium": (["photography", "art", "none"], {"default": "none"}),
22 | "guidance_method_1": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
23 | "guidance_method_1_scale": ("FLOAT", {"default": 1.0}),
24 | "guidance_method_1_image": ("IMAGE", ),
25 | "guidance_method_2": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
26 | "guidance_method_2_scale": ("FLOAT", {"default": 1.0}),
27 | "guidance_method_2_image": ("IMAGE", ),
28 | "image_prompt_mode": (["regular", "style_only"], {"default": "regular"}),
29 | "image_prompt_image": ("IMAGE", ),
30 | "image_prompt_scale": ("FLOAT", {"default": 1.0}),
31 | "content_moderation": ("INT", {"default": 0}),
32 | }
33 | }
34 |
35 | RETURN_TYPES = ("IMAGE",)
36 | RETURN_NAMES = ("output_image",)
37 | CATEGORY = "API Nodes"
38 | FUNCTION = "execute" # This is the method that will be executed
39 |
40 | def __init__(self):
41 | self.api_url = "https://engine.prod.bria-api.com/v1/text-to-image/base/2.3" #"http://0.0.0.0:5000/v1/text-to-image/base/2.3"
42 |
43 | def execute(
44 | self, api_key, prompt, aspect_ratio, seed, negative_prompt,
45 | steps_num, prompt_enhancement, text_guidance_scale, medium,
46 | guidance_method_1=None, guidance_method_1_scale=None, guidance_method_1_image=None,
47 | guidance_method_2=None, guidance_method_2_scale=None, guidance_method_2_image=None,
48 | image_prompt_mode=None, image_prompt_image=None, image_prompt_scale=None,
49 | content_moderation=0,
50 | ):
51 | payload = {
52 | "prompt": prompt,
53 | "num_results": 1,
54 | "aspect_ratio": aspect_ratio,
55 | "sync": True,
56 | "seed": seed,
57 | "negative_prompt": negative_prompt,
58 | "steps_num": steps_num,
59 | "text_guidance_scale": text_guidance_scale,
60 | "prompt_enhancement": prompt_enhancement,
61 | "content_moderation": content_moderation,
62 | }
63 | if medium != "none":
64 | payload["medium"] = medium
65 | if guidance_method_1_image is not None:
66 | guidance_method_1_image = preprocess_image(guidance_method_1_image)
67 | guidance_method_1_image = image_to_base64(guidance_method_1_image)
68 | payload["guidance_method_1"] = guidance_method_1
69 | payload["guidance_method_1_scale"] = guidance_method_1_scale
70 | payload["guidance_method_1_image_file"] = guidance_method_1_image
71 | if guidance_method_2_image is not None:
72 | guidance_method_2_image = preprocess_image(guidance_method_2_image)
73 | guidance_method_2_image = image_to_base64(guidance_method_2_image)
74 | payload["guidance_method_2"] = guidance_method_2
75 | payload["guidance_method_2_scale"] = guidance_method_2_scale
76 | payload["guidance_method_2_image_file"] = guidance_method_2_image
77 | if image_prompt_image is not None:
78 | image_prompt_image = preprocess_image(image_prompt_image)
79 | image_prompt_image = image_to_base64(image_prompt_image)
80 | payload["image_prompt_mode"] = image_prompt_mode
81 | payload["image_prompt_file"] = image_prompt_image
82 | payload["image_prompt_scale"] = image_prompt_scale
83 | response = requests.post(
84 | self.api_url,
85 | json=payload,
86 | headers={"api_token": api_key}
87 | )
88 | if response.status_code == 200:
89 | response_dict = response.json()
90 | image_response = requests.get(response_dict['result'][0]["urls"][0])
91 | result_image = postprocess_image(image_response.content)
92 | return (result_image,)
93 | else:
94 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
95 |
--------------------------------------------------------------------------------
/nodes/text_2_image_fast_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .common import postprocess_image, preprocess_image, image_to_base64
4 |
5 |
6 | class Text2ImageFastNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "api_key": ("STRING", ),
12 | },
13 | "optional": {
14 | "prompt": ("STRING",),
15 | "aspect_ratio": (["1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9"], {"default": "4:3"}),
16 | "seed": ("INT", {"default": -1}),
17 | "steps_num": ("INT", {"default": 8}),
18 | "prompt_enhancement": ("INT", {"default": 0}),
19 | "guidance_method_1": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
20 | "guidance_method_1_scale": ("FLOAT", {"default": 1.0}),
21 | "guidance_method_1_image": ("IMAGE", ),
22 | "guidance_method_2": (["controlnet_canny", "controlnet_depth", "controlnet_recoloring", "controlnet_color_grid"], {"default": "controlnet_canny"}),
23 | "guidance_method_2_scale": ("FLOAT", {"default": 1.0}),
24 | "guidance_method_2_image": ("IMAGE", ),
25 | "image_prompt_mode": (["regular", "style_only"], {"default": "regular"}),
26 | "image_prompt_image": ("IMAGE", ),
27 | "image_prompt_scale": ("FLOAT", {"default": 1.0}),
28 | "content_moderation": ("INT", {"default": 0}),
29 | }
30 | }
31 |
32 | RETURN_TYPES = ("IMAGE",)
33 | RETURN_NAMES = ("output_image",)
34 | CATEGORY = "API Nodes"
35 | FUNCTION = "execute"
36 |
37 | def __init__(self):
38 | self.api_url = "https://engine.prod.bria-api.com/v1/text-to-image/fast/2.3" #"http://0.0.0.0:5000/v1/text-to-image/fast/2.3"
39 |
40 | def execute(
41 | self, api_key, prompt, aspect_ratio, seed,
42 | steps_num, prompt_enhancement,
43 | guidance_method_1=None, guidance_method_1_scale=None, guidance_method_1_image=None,
44 | guidance_method_2=None, guidance_method_2_scale=None, guidance_method_2_image=None,
45 | image_prompt_mode=None, image_prompt_image=None, image_prompt_scale=None,
46 | content_moderation=0,
47 | ):
48 | payload = {
49 | "prompt": prompt,
50 | "num_results": 1,
51 | "aspect_ratio": aspect_ratio,
52 | "sync": True,
53 | "seed": seed,
54 | "steps_num": steps_num,
55 | "prompt_enhancement": prompt_enhancement,
56 | "content_moderation": content_moderation,
57 | }
58 | if guidance_method_1_image is not None:
59 | guidance_method_1_image = preprocess_image(guidance_method_1_image)
60 | guidance_method_1_image = image_to_base64(guidance_method_1_image)
61 | payload["guidance_method_1"] = guidance_method_1
62 | payload["guidance_method_1_scale"] = guidance_method_1_scale
63 | payload["guidance_method_1_image_file"] = guidance_method_1_image
64 | if guidance_method_2_image is not None:
65 | guidance_method_2_image = preprocess_image(guidance_method_2_image)
66 | guidance_method_2_image = image_to_base64(guidance_method_2_image)
67 | payload["guidance_method_2"] = guidance_method_2
68 | payload["guidance_method_2_scale"] = guidance_method_2_scale
69 | payload["guidance_method_2_image_file"] = guidance_method_2_image
70 | if image_prompt_image is not None:
71 | image_prompt_image = preprocess_image(image_prompt_image)
72 | image_prompt_image = image_to_base64(image_prompt_image)
73 | payload["image_prompt_mode"] = image_prompt_mode
74 | payload["image_prompt_file"] = image_prompt_image
75 | payload["image_prompt_scale"] = image_prompt_scale
76 | response = requests.post(
77 | self.api_url,
78 | json=payload,
79 | headers={"api_token": api_key}
80 | )
81 | if response.status_code == 200:
82 | response_dict = response.json()
83 | image_response = requests.get(response_dict['result'][0]["urls"][0])
84 | result_image = postprocess_image(image_response.content)
85 | return (result_image,)
86 | else:
87 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
88 |
--------------------------------------------------------------------------------
/nodes/text_2_image_hd_node.py:
--------------------------------------------------------------------------------
1 | import requests
2 |
3 | from .common import postprocess_image
4 |
5 |
6 | class Text2ImageHDNode():
7 | @classmethod
8 | def INPUT_TYPES(self):
9 | return {
10 | "required": {
11 | "api_key": ("STRING", ),
12 | },
13 | "optional": {
14 | "prompt": ("STRING",),
15 | "aspect_ratio": (["1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9"], {"default": "4:3"}),
16 | "seed": ("INT", {"default": -1}),
17 | "negative_prompt": ("STRING", {"default": ""}),
18 | "steps_num": ("INT", {"default": 30}),
19 | "prompt_enhancement": ("INT", {"default": 0}),
20 | "text_guidance_scale": ("INT", {"default": 5}),
21 | "medium": (["photography", "art", "none"], {"default": "none"}),
22 | "content_moderation": ("INT", {"default": 0}),
23 | }
24 | }
25 |
26 | RETURN_TYPES = ("IMAGE",)
27 | RETURN_NAMES = ("output_image",)
28 | CATEGORY = "API Nodes"
29 | FUNCTION = "execute"
30 |
31 | def __init__(self):
32 | self.api_url = "https://engine.prod.bria-api.com/v1/text-to-image/hd/2.3" #"http://0.0.0.0:5000/v1/text-to-image/hd/2.3"
33 |
34 | def execute(
35 | self, api_key, prompt, aspect_ratio, seed, negative_prompt,
36 | steps_num, prompt_enhancement, text_guidance_scale, medium, content_moderation=0,
37 | ):
38 | payload = {
39 | "prompt": prompt,
40 | "num_results": 1,
41 | "aspect_ratio": aspect_ratio,
42 | "sync": True,
43 | "seed": seed,
44 | "negative_prompt": negative_prompt,
45 | "steps_num": steps_num,
46 | "text_guidance_scale": text_guidance_scale,
47 | "prompt_enhancement": prompt_enhancement,
48 | "content_moderation": content_moderation,
49 | }
50 | if medium != "none":
51 | payload["medium"] = medium
52 | response = requests.post(
53 | self.api_url,
54 | json=payload,
55 | headers={"api_token": api_key}
56 | )
57 | if response.status_code == 200:
58 | response_dict = response.json()
59 | image_response = requests.get(response_dict['result'][0]["urls"][0])
60 | result_image = postprocess_image(image_response.content)
61 | return (result_image,)
62 | else:
63 | raise Exception(f"Error: API request failed with status code {response.status_code} and text {response.text}")
64 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "comfyui-bria-api"
3 | description = "Custom nodes for ComfyUI using BRIA's API."
4 | version = "2.0.5"
5 | license = {file = "LICENSE"}
6 |
7 | [project.urls]
8 | Repository = "https://github.com/Bria-AI/ComfyUI-BRIA-API"
9 | # Used by Comfy Registry https://comfyregistry.org
10 |
11 | [tool.comfy]
12 | PublisherId = "briaai"
13 | DisplayName = "ComfyUI-BRIA-API"
14 | Icon = ""
15 |
--------------------------------------------------------------------------------
/workflows/background_generation_workflow.json:
--------------------------------------------------------------------------------
1 | {"last_node_id":39,"last_link_id":65,"nodes":[{"id":34,"type":"LoadImage","pos":[19.94045066833496,1075.806640625],"size":[315,314],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[61],"slot_index":0,"localized_name":"IMAGE"},{"name":"MASK","type":"MASK","links":null,"localized_name":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["quirky-red-brick-brick-wallpaper.jpg","image"]},{"id":36,"type":"PreviewImage","pos":[468.7108154296875,1210.288818359375],"size":[210,246],"flags":{},"order":5,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":63,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":33,"type":"PreviewImage","pos":[1317.4083251953125,1118.4864501953125],"size":[210,246],"flags":{},"order":8,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":59,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":31,"type":"PreviewImage","pos":[1668.734619140625,796.7755737304688],"size":[210,246],"flags":{},"order":10,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":57,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":29,"type":"PreviewImage","pos":[872.8858032226562,679.9429321289062],"size":[210,246],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":55,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":28,"type":"LoadImage","pos":[29.629886627197266,676.8370361328125],"size":[315,314],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[54],"slot_index":0,"localized_name":"IMAGE"},{"name":"MASK","type":"MASK","links":null,"localized_name":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["pexels-photo-1808399.jpeg","image"]},{"id":35,"type":"RemoveForegroundNode","pos":[414.2638854980469,1079.1705322265625],"size":[315,58],"flags":{},"order":3,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":61,"localized_name":"image"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[62,63],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"RemoveForegroundNode"},"widgets_values":["BRIA_API_TOKEN"]},{"id":32,"type":"ReplaceBgNode","pos":[834.6115112304688,1063.0406494140625],"size":[315,294],"flags":{},"order":7,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":65,"localized_name":"image"},{"name":"ref_image","type":"IMAGE","link":62,"shape":7,"localized_name":"ref_image"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[59,64],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"ReplaceBgNode"},"widgets_values":["BRIA_API_TOKEN",false,"",true,true,true,false,"",1978,"randomize"]},{"id":30,"type":"ImageExpansionNode","pos":[1265.001220703125,798.8323974609375],"size":[315,226],"flags":{},"order":9,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":64,"localized_name":"image"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[57],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"ImageExpansionNode"},"widgets_values":["600,760","200, 0","BRIA_API_TOKEN","1200, 800","",1729,"randomize","Ugly, 
mutated"]},{"id":38,"type":"Note","pos":[436.7110900878906,566.2047119140625],"size":[306.28387451171875,58],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["You can get your BRIA API token at:\nhttps://bria.ai/api/"],"color":"#432","bgcolor":"#653"},{"id":27,"type":"RmbgNode","pos":[430.8815002441406,678.7727661132812],"size":[315,58],"flags":{},"order":4,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":54,"localized_name":"image"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[55,65],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"RmbgNode"},"widgets_values":["BRIA_API_TOKEN"]}],"links":[[51,5,0,15,2,"STRING"],[54,28,0,27,0,"IMAGE"],[55,27,0,29,0,"IMAGE"],[57,30,0,31,0,"IMAGE"],[59,32,0,33,0,"IMAGE"],[61,34,0,35,0,"IMAGE"],[62,35,0,32,1,"IMAGE"],[63,35,0,36,0,"IMAGE"],[64,32,0,30,0,"IMAGE"],[65,27,0,32,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8140274938684037,"offset":[101.53311990208498,-477.0342684311694]},"node_versions":{"comfyui-bria-api":"c72754d15b53a13ee0c0419d70401232c56b7fdb","comfy-core":"v0.3.8-1-gc441048","ComfyUI-Jjk-Nodes":"b3c99bb78a99551776b5eab1a820e1cd58f84f31"}},"version":0.4}
--------------------------------------------------------------------------------
/workflows/eraser_genfill_workflow.json:
--------------------------------------------------------------------------------
1 | {"last_node_id":41,"last_link_id":62,"nodes":[{"id":14,"type":"Note","pos":[478,444],"size":[396.80859375,61.8046875],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["Right click, and choose \"Open in Mask Editor\" to draw a mask of areas you want to erase."],"color":"#432","bgcolor":"#653"},{"id":15,"type":"Note","pos":[1080.3062744140625,445.9654541015625],"size":[306.28387451171875,58],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["You can get your BRIA API token at:\nhttps://bria.ai/api/"],"color":"#432","bgcolor":"#653"},{"id":30,"type":"LoadImage","pos":[479,572],"size":[395.7845153808594,352.8512268066406],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[56],"slot_index":0,"shape":3,"localized_name":"IMAGE"},{"name":"MASK","type":"MASK","links":[57],"slot_index":1,"shape":3,"localized_name":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["clipspace/clipspace-mask-4068974.800000012.png [input]","image"]},{"id":37,"type":"PreviewImage","pos":[1504.4755859375,568.9967651367188],"size":[438.50262451171875,376.8338317871094],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":58,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":33,"type":"PreviewImage","pos":[1502.785888671875,1078.3564453125],"size":[433.29193115234375,357.1255187988281],"flags":{},"order":7,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":54,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":40,"type":"LoadImage","pos":[541.1226806640625,1079.39697265625],"size":[315,314],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[61],"slot_index":0,"localized_name":"IMAGE"},{"name":"MASK","type":"MASK","links":[62],"slot_index":1,"localized_name":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["clipspace/clipspace-mask-4411367.100000024.png [input]","image"]},{"id":34,"type":"BriaGenFill","pos":[1032.416748046875,1073.984619140625],"size":[315,102],"flags":{},"order":5,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":61,"localized_name":"image"},{"name":"mask","type":"MASK","link":62,"localized_name":"mask"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[54],"slot_index":0,"shape":3,"localized_name":"output_image"}],"properties":{"Node name for S&R":"BriaGenFill"},"widgets_values":["a blue coffee mug","BRIA_API_TOKEN"]},{"id":36,"type":"BriaEraser","pos":[1068.6063232421875,574.6080322265625],"size":[315,78],"flags":{},"order":4,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":56,"localized_name":"image"},{"name":"mask","type":"MASK","link":57,"localized_name":"mask"}],"outputs":[{"name":"output_image","type":"IMAGE","links":[58],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"BriaEraser"},"widgets_values":["BRIA_API_TOKEN"]}],"links":[[54,34,0,33,0,"IMAGE"],[56,30,0,36,0,"IMAGE"],[57,30,1,36,1,"MASK"],[58,36,0,37,0,"IMAGE"],[61,40,0,34,0,"IMAGE"],[62,40,1,34,1,"MASK"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.6727499949325677,"offset":[131.53042816003972,-419.53430403204317]}},"version":0.4}
--------------------------------------------------------------------------------
/workflows/product shot generation_workflow.json:
--------------------------------------------------------------------------------
1 | {
2 | "last_node_id": 42,
3 | "last_link_id": 65,
4 | "nodes": [
5 | {
6 | "id": 42,
7 | "type": "LoadImage",
8 | "pos": {
9 | "0": 591,
10 | "1": 593
11 | },
12 | "size": {
13 | "0": 315,
14 | "1": 314
15 | },
16 | "flags": {},
17 | "order": 0,
18 | "mode": 0,
19 | "inputs": [],
20 | "outputs": [
21 | {
22 | "name": "IMAGE",
23 | "type": "IMAGE",
24 | "links": [
25 | 64,
26 | 65
27 | ],
28 | "slot_index": 0
29 | },
30 | {
31 | "name": "MASK",
32 | "type": "MASK",
33 | "links": null
34 | }
35 | ],
36 | "properties": {
37 | "Node name for S&R": "LoadImage"
38 | },
39 | "widgets_values": [
40 | "A_bottle_of_perfume.png",
41 | "image"
42 | ]
43 | },
44 | {
45 | "id": 39,
46 | "type": "LoadImage",
47 | "pos": {
48 | "0": 600,
49 | "1": 988
50 | },
51 | "size": {
52 | "0": 315,
53 | "1": 314
54 | },
55 | "flags": {},
56 | "order": 1,
57 | "mode": 0,
58 | "inputs": [],
59 | "outputs": [
60 | {
61 | "name": "IMAGE",
62 | "type": "IMAGE",
63 | "links": [
64 | 59
65 | ],
66 | "slot_index": 0
67 | },
68 | {
69 | "name": "MASK",
70 | "type": "MASK",
71 | "links": null
72 | }
73 | ],
74 | "properties": {
75 | "Node name for S&R": "LoadImage"
76 | },
77 | "widgets_values": [
78 | "A_red_studio_with_a_shelf__close_up.png",
79 | "image"
80 | ]
81 | },
82 | {
83 | "id": 15,
84 | "type": "Note",
85 | "pos": {
86 | "0": 995.4524536132812,
87 | "1": 601.5353393554688
88 | },
89 | "size": {
90 | "0": 306.28387451171875,
91 | "1": 58
92 | },
93 | "flags": {},
94 | "order": 2,
95 | "mode": 0,
96 | "inputs": [],
97 | "outputs": [],
98 | "properties": {},
99 | "widgets_values": [
100 | "You can get your BRIA API token at:\nhttps://bria.ai/api/"
101 | ],
102 | "color": "#432",
103 | "bgcolor": "#653"
104 | },
105 | {
106 | "id": 40,
107 | "type": "PreviewImage",
108 | "pos": {
109 | "0": 1408,
110 | "1": 623
111 | },
112 | "size": [
113 | 210,
114 | 246
115 | ],
116 | "flags": {},
117 | "order": 5,
118 | "mode": 0,
119 | "inputs": [
120 | {
121 | "name": "images",
122 | "type": "IMAGE",
123 | "link": 62
124 | }
125 | ],
126 | "outputs": [],
127 | "properties": {
128 | "Node name for S&R": "PreviewImage"
129 | }
130 | },
131 | {
132 | "id": 41,
133 | "type": "PreviewImage",
134 | "pos": {
135 | "0": 1408,
136 | "1": 951
137 | },
138 | "size": [
139 | 210,
140 | 246
141 | ],
142 | "flags": {},
143 | "order": 6,
144 | "mode": 0,
145 | "inputs": [
146 | {
147 | "name": "images",
148 | "type": "IMAGE",
149 | "link": 63
150 | }
151 | ],
152 | "outputs": [],
153 | "properties": {
154 | "Node name for S&R": "PreviewImage"
155 | }
156 | },
157 | {
158 | "id": 36,
159 | "type": "ShotByTextNode",
160 | "pos": {
161 | "0": 996,
162 | "1": 736
163 | },
164 | "size": {
165 | "0": 315,
166 | "1": 106
167 | },
168 | "flags": {},
169 | "order": 3,
170 | "mode": 0,
171 | "inputs": [
172 | {
173 | "name": "image",
174 | "type": "IMAGE",
175 | "link": 64
176 | }
177 | ],
178 | "outputs": [
179 | {
180 | "name": "output_image",
181 | "type": "IMAGE",
182 | "links": [
183 | 62
184 | ],
185 | "slot_index": 0
186 | }
187 | ],
188 | "properties": {
189 | "Node name for S&R": "ShotByTextNode"
190 | },
191 | "widgets_values": [
192 | "a beautiful sunset",
193 | 1,
194 | ""
195 | ]
196 | },
197 | {
198 | "id": 37,
199 | "type": "ShotByImageNode",
200 | "pos": {
201 | "0": 999,
202 | "1": 932
203 | },
204 | "size": {
205 | "0": 315,
206 | "1": 102
207 | },
208 | "flags": {},
209 | "order": 4,
210 | "mode": 0,
211 | "inputs": [
212 | {
213 | "name": "image",
214 | "type": "IMAGE",
215 | "link": 65
216 | },
217 | {
218 | "name": "ref_image",
219 | "type": "IMAGE",
220 | "link": 59
221 | }
222 | ],
223 | "outputs": [
224 | {
225 | "name": "output_image",
226 | "type": "IMAGE",
227 | "links": [
228 | 63
229 | ],
230 | "slot_index": 0
231 | }
232 | ],
233 | "properties": {
234 | "Node name for S&R": "ShotByImageNode"
235 | },
236 | "widgets_values": [
237 | 0,
238 | ""
239 | ]
240 | }
241 | ],
242 | "links": [
243 | [
244 | 59,
245 | 39,
246 | 0,
247 | 37,
248 | 1,
249 | "IMAGE"
250 | ],
251 | [
252 | 62,
253 | 36,
254 | 0,
255 | 40,
256 | 0,
257 | "IMAGE"
258 | ],
259 | [
260 | 63,
261 | 37,
262 | 0,
263 | 41,
264 | 0,
265 | "IMAGE"
266 | ],
267 | [
268 | 64,
269 | 42,
270 | 0,
271 | 36,
272 | 0,
273 | "IMAGE"
274 | ],
275 | [
276 | 65,
277 | 42,
278 | 0,
279 | 37,
280 | 0,
281 | "IMAGE"
282 | ]
283 | ],
284 | "groups": [],
285 | "config": {},
286 | "extra": {
287 | "ds": {
288 | "scale": 0.9849732675807669,
289 | "offset": [
290 | -339.6686422794803,
291 | -496.4354678014682
292 | ]
293 | }
294 | },
295 | "version": 0.4
296 | }
297 |
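
Note that these .json files are UI exports, the format ComfyUI's workflow loader expects; they are not the flattened prompt format the server API consumes. To queue the product-shot graph programmatically, you would re-export it via ComfyUI's "Save (API Format)" option and POST it to a running server. A hedged sketch, assuming a default local server at 127.0.0.1:8188 and a hypothetical API-format re-export named `product_shot_api.json`:

```python
import json
import urllib.request

# Hedged sketch: queue an API-format export (not the UI .json files shown here)
# on a local ComfyUI server. Assumes the default address 127.0.0.1:8188 and a
# hypothetical re-export made via "Save (API Format)".
with open("workflows/product_shot_api.json") as f:
    prompt = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```
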
--------------------------------------------------------------------------------
/workflows/tailored_workflow.json:
--------------------------------------------------------------------------------
1 | {"last_node_id":21,"last_link_id":49,"nodes":[{"id":2,"type":"TailoredModelInfoNode","pos":[480.2208557128906,641.4290161132812],"size":[315,122],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"generation_prefix","type":"STRING","links":[2],"slot_index":0,"localized_name":"generation_prefix"},{"name":"default_fast","type":"INT","links":[46],"slot_index":1,"localized_name":"default_fast"},{"name":"default_steps_num","type":"INT","links":[40],"slot_index":2,"localized_name":"default_steps_num"}],"properties":{"Node name for S&R":"TailoredModelInfoNode"},"widgets_values":["",""]},{"id":3,"type":"PreviewImage","pos":[1728.9140625,688.2759399414062],"size":[210,246],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":41,"localized_name":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":11,"type":"LoadImage","pos":[693.87060546875,831.6130981445312],"size":[315,314],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[48],"slot_index":0,"localized_name":"IMAGE"},{"name":"MASK","type":"MASK","links":null,"localized_name":"MASK"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["example.png","image"]},{"id":5,"type":"JjkShowText","pos":[847.2867431640625,591.890625],"size":[315,76],"flags":{},"order":4,"mode":0,"inputs":[{"name":"text","type":"STRING","link":2,"widget":{"name":"text"}}],"outputs":[{"name":"text","type":"STRING","links":[49],"slot_index":0,"shape":6,"localized_name":"text"}],"properties":{"Node name for S&R":"JjkShowText"},"widgets_values":["A photo of a character named Sami, a siamese cat with blue eyes, "]},{"id":15,"type":"BriaTailoredGen","pos":[1207.595947265625,645.6781005859375],"size":[456,438],"flags":{},"order":5,"mode":0,"inputs":[{"name":"guidance_method_1_image","type":"IMAGE","link":48,"shape":7,"localized_name":"guidance_method_1_image"},{"name":"guidance_method_2_image","type":"IMAGE","link":null,"shape":7,"localized_name":"guidance_method_2_image"},{"name":"generation_prefix","type":"STRING","link":49,"widget":{"name":"generation_prefix"},"shape":7},{"name":"fast","type":"INT","link":46,"widget":{"name":"fast"},"shape":7},{"name":"steps_num","type":"INT","link":40,"widget":{"name":"steps_num"},"shape":7}],"outputs":[{"name":"output_image","type":"IMAGE","links":[41],"slot_index":0,"localized_name":"output_image"}],"properties":{"Node name for S&R":"BriaTailoredGen"},"widgets_values":["","","a cat","","4:3",-1,"randomize",1,"","",1,"controlnet_canny",1,"controlnet_canny",1]},{"id":21,"type":"Note","pos":[1215.2469482421875,522.4407348632812],"size":[449.75360107421875,58],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["You can get your BRIA API token at: https://bria.ai/api/"],"color":"#432","bgcolor":"#653"},{"id":19,"type":"Note","pos":[484.5993957519531,486.4328918457031],"size":[306.0655212402344,89.87609100341797],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["This node is used to retrieve default settings and prompt prefixes for the chosen tailored 
model."],"color":"#432","bgcolor":"#653"}],"links":[[2,2,0,5,0,"STRING"],[40,2,2,15,4,"INT"],[41,15,0,3,0,"IMAGE"],[46,2,1,15,3,"INT"],[48,11,0,15,0,"IMAGE"],[49,5,0,15,2,"STRING"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7627768444385483,"offset":[-122.90473166350671,-300.2018923615813]},"node_versions":{"comfyui-bria-api":"c72754d15b53a13ee0c0419d70401232c56b7fdb","comfy-core":"v0.3.8-1-gc441048","ComfyUI-Jjk-Nodes":"b3c99bb78a99551776b5eab1a820e1cd58f84f31"}},"version":0.4}
--------------------------------------------------------------------------------
/workflows/text_to_image_workflow.json:
--------------------------------------------------------------------------------
1 | {
2 | "last_node_id": 13,
3 | "last_link_id": 11,
4 | "nodes": [
5 | {
6 | "id": 12,
7 | "type": "LoadImage",
8 | "pos": [
9 | 669.7035522460938,
10 | 136.97129821777344
11 | ],
12 | "size": [
13 | 315,
14 | 314
15 | ],
16 | "flags": {},
17 | "order": 0,
18 | "mode": 0,
19 | "inputs": [],
20 | "outputs": [
21 | {
22 | "name": "IMAGE",
23 | "type": "IMAGE",
24 | "links": [
25 | 10
26 | ],
27 | "slot_index": 0
28 | },
29 | {
30 | "name": "MASK",
31 | "type": "MASK",
32 | "links": null
33 | }
34 | ],
35 | "properties": {
36 | "Node name for S&R": "LoadImage"
37 | },
38 | "widgets_values": [
39 | "pexels-photo-3246665.png",
40 | "image"
41 | ]
42 | },
43 | {
44 | "id": 11,
45 | "type": "PreviewImage",
46 | "pos": [
47 | 1587.27001953125,
48 | 97.39167022705078
49 | ],
50 | "size": [
51 | 210,
52 | 246
53 | ],
54 | "flags": {},
55 | "order": 2,
56 | "mode": 0,
57 | "inputs": [
58 | {
59 | "name": "images",
60 | "type": "IMAGE",
61 | "link": 11
62 | }
63 | ],
64 | "outputs": [],
65 | "properties": {
66 | "Node name for S&R": "PreviewImage"
67 | }
68 | },
69 | {
70 | "id": 10,
71 | "type": "Text2ImageFastNode",
72 | "pos": [
73 | 1058.320556640625,
74 | 96.64427185058594
75 | ],
76 | "size": [
77 | 438.71258544921875,
78 | 394.27716064453125
79 | ],
80 | "flags": {},
81 | "order": 1,
82 | "mode": 0,
83 | "inputs": [
84 | {
85 | "name": "guidance_method_1_image",
86 | "type": "IMAGE",
87 | "link": null,
88 | "shape": 7
89 | },
90 | {
91 | "name": "guidance_method_2_image",
92 | "type": "IMAGE",
93 | "link": null,
94 | "shape": 7
95 | },
96 | {
97 | "name": "image_prompt_image",
98 | "type": "IMAGE",
99 | "link": 10,
100 | "shape": 7
101 | }
102 | ],
103 | "outputs": [
104 | {
105 | "name": "output_image",
106 | "type": "IMAGE",
107 | "links": [
108 | 11
109 | ],
110 | "slot_index": 0
111 | }
112 | ],
113 | "properties": {
114 | "Node name for S&R": "Text2ImageFastNode"
115 | },
116 | "widgets_values": [
117 | "BRIA_API_TOKEN",
118 | "A drawing of a lion on a table.\t",
119 | "4:3",
120 | 990,
121 | "randomize",
122 | 8,
123 | 0,
124 | "controlnet_canny",
125 | 1,
126 | "controlnet_canny",
127 | 1,
128 | "regular",
129 | 1
130 | ]
131 | }
132 | ],
133 | "links": [
134 | [
135 | 10,
136 | 12,
137 | 0,
138 | 10,
139 | 2,
140 | "IMAGE"
141 | ],
142 | [
143 | 11,
144 | 10,
145 | 0,
146 | 11,
147 | 0,
148 | "IMAGE"
149 | ]
150 | ],
151 | "groups": [],
152 | "config": {},
153 | "extra": {
154 | "ds": {
155 | "scale": 0.7513148009015777,
156 | "offset": [
157 | 11.112206386364164,
158 | 66.47311795454547
159 | ]
160 | },
161 | "node_versions": {
162 | "comfy-core": "v0.3.10-42-gff83865",
163 | "comfyui-bria-api": "499ec5d104cc5110407eafce468ce1d47ac168b3"
164 | },
165 | "VHS_latentpreview": false,
166 | "VHS_latentpreviewrate": 0
167 | },
168 | "version": 0.4
169 | }
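
Because every BRIA node stores its token as an ordinary widget value, the `BRIA_API_TOKEN` placeholder can be patched across all exports in one pass rather than node by node. A minimal sketch, assuming the placeholder string used throughout these files (note it rewrites each file minified):

```python
import json
import pathlib

TOKEN = "my-real-bria-token"  # replace with a token from https://bria.ai/api/

# Sketch: swap the BRIA_API_TOKEN placeholder in every workflow export.
for path in pathlib.Path("workflows").glob("*.json"):
    wf = json.loads(path.read_text())
    for node in wf["nodes"]:
        if isinstance(node.get("widgets_values"), list):
            node["widgets_values"] = [
                TOKEN if w == "BRIA_API_TOKEN" else w
                for w in node["widgets_values"]
            ]
    path.write_text(json.dumps(wf))
```
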
--------------------------------------------------------------------------------