├── .github
│   └── workflows
│       └── publish.yml
├── README.md
├── __init__.py
├── nodes.py
├── pyproject.toml
├── red_to_blue_gradient_latent_guidance_workflow.png
├── spiral.jpg
└── workflows
    ├── playground_abunchofstuff.png
    ├── playground_new_autocfg.png
    └── playground_precfgpag.png

/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
      - master
    paths:
      - "pyproject.toml"

jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    # Skip the workflow if this is a forked repository.
    if: github.event.repository.fork == false
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@main
        with:
          ## Add your own personal access token to your GitHub repository secrets and reference it here.
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# Pre CFG nodes

A set of nodes to prepare the noise predictions before the CFG function.

All can be **chained and repeated** within the same workflow!

They are designed to be highly compatible with most nodes.

The order matters and depends on your needs; the best chaining order is therefore to be determined by your own preferences.

All are to be used like any model-patching node, right after the model loader.

# Nodes:

## Other nodes

There are now too many nodes for me to add a screenshot and a bunch of details for each, but it would be a shame not to describe them:

- Perturbed attention guidance: adaptation of PAG as a pre-CFG node.
- Variable CFG: makes your scale vary along the generation.
- Channel multipliers.
- Subtract prediction mean: gives more balanced colors.
- "Flip flop": swaps the positive with the negative. Since the order matters, you can chain it with other nodes and switch back to the correct order afterwards. For experimental purposes.
- Shape attention (for SDXL): can turn off input layer 8.
- Support empty uncond: combined with "menu > advanced > conditioning > set timestep range" at ~65%, you can now get a speed boost on any workflow.
- Set timestep range from sigmas: same as the default node, except that you use sigmas instead of step percentages.
- [The testing branch](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/tree/testing_wip) has a few more and is the current state of these nodes for me.


## Pre CFG automatic scale

![image](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/assets/15731540/0437bf5e-1864-41ce-b929-654612b648a6)

### mode:
- Automatic CFG: applies the same predictable scaling as my other nodes, based on the logic sketched just below.
- Strict scaling: applies a scaling which will always give the exact desired value. This tends to create artifacts and random blurs if carried through to the end.
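For reference, here is a minimal standalone sketch of the measurement this scaling is built on (it mirrors the `topk_average` helper and the `automatic_pre_cfg` loop in this repo's `nodes.py`; the toy tensor shapes and the reference scale of 8 are illustrative only, not a drop-in node):

```python
import torch
from math import ceil

def topk_average(latent, top_k=0.25):
    # "average" measuring method: mean magnitude of the top and bottom top_k fractions
    k = ceil(latent.numel() * top_k)
    top    = torch.topk(latent.flatten(), k=k, largest=True).values
    bottom = torch.topk(latent.flatten(), k=k, largest=False).values
    return (top.mean() + bottom.abs().mean()) / 2

cond, uncond = torch.randn(4, 128, 128), torch.randn(4, 128, 128)
scale_multiplier, reference_CFG = 0.8, 8.0

# Measure the prediction as it would look once combined at the reference CFG scale,
# then rescale both predictions so that this measurement lands on the target value.
mes = topk_average(reference_CFG * cond - (reference_CFG - 1) * uncond)
factor = scale_multiplier / max(mes, 0.01)
cond, uncond = cond * factor, uncond * factor
```

Both predictions receive the same factor, so the difference that the CFG formula amplifies is rescaled consistently.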

### Support empty uncond:

If you use the built-in node named ConditioningSetTimestepRange, you can stop generating a negative prediction early by letting your negative conditioning go through it while setting it like this:

![image](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/assets/15731540/4bb39087-d02a-4dd9-821d-dc1f43870eb0)

This doubles the generation speed for the steps where there is no negative.

The only issue if you do this is that the CFG function will then weigh your positive prediction, multiplied by your CFG scale, against nothing, and you will get a black image.

"support_empty_uncond" therefore divides your positive prediction by your CFG scale and avoids this issue.

This combination is similar to the "boost" feature of my original automatic CFG node. It can also let you avoid artifacts if you want to use the strict scaling.

If you want to use this option in a chained setup using this node multiple times, I recommend enabling it only once, on the last node.

## Pre CFG perp-neg

![image](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/assets/15731540/606b2ff3-fb81-4964-8e6d-cee97011a623)

Applies the already known [perp-neg logic](https://perp-neg.github.io/).

Code taken and adapted from ComfyAnon's implementation.

The context length (added after the screenshot of the node) can be set to a higher value if you are using a TensorRT engine requiring a higher context length.

For more details you can check [my node related to this, "Conditioning crop or fill"](https://github.com/Extraltodeus/Uncond-Zero-for-ComfyUI?tab=readme-ov-file#conditioning-crop-or-fill), where I explain a bit more about it.

## Pre CFG sharpening (experimental)

![image](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/assets/15731540/ffca8fae-34b0-44fa-bcd5-dc2ed2c625ca)

Subtracts from the current step's prediction a part of the previous step's. This tends to make the images sharper and less saturated.

A negative value can be set.

## Pre CFG exponentiation (experimental)

![image](https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/assets/15731540/34367216-eccf-411e-8fab-c63ff0f24331)

A value lower than one will simplify the end result and enhance the saturation/contrast.

A value higher than one will do the opposite, and if pushed too far will most likely make a mess.

## Gradient scaling:

Named like this because I initially wanted to test what would happen if, instead of a single CFG scale, I used a tensor shaped like the latent space with a gradual variation. And then, why not try to use masks instead? And what if I could make each value match another input image as closely as possible?

The result is an arithmetic scaling method which does not noticeably slow down the sampling, while also scaling the intensity of the values like an "automatic CFG" (a sketch of the underlying math follows the option list below).

So here it is:

![image](https://github.com/user-attachments/assets/86e52c18-d85b-47cc-aee7-cf8750e50bb2)

So, simply put:

- Maximum scale: the highest CFG scale that may be used to try to match the input. You can go as high as 500 and still get an output. At 1000 you should stop before the end.
- Minimum scale: same, for the lower bound, but this one I find best to leave between 3.5 and 5.
- Strength: an overall multiplier for the effect. Generally left at 1, but if you use a plain color image and feel like your results are too smooth, you may want to lower it.
- End at sigma: you can go down to the end of the sampling if using the "converging scales" toggle described below, but in general I prefer to stop at 0.28. Stopping before the end gives better results with super high scales. 0.28 is the default value.
- Converging scales: makes the min and max scales converge toward your sampler's scale as the sampling progresses. This can weaken the pattern-matching effect if you are aiming for something precise, but otherwise it greatly enhances the final result and also allows the use of a bigger maximum scale.
- Invert mask: for convenience.
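Under the hood the node never touches your sampler's CFG scale: it rewrites the negative prediction so that the standard CFG formula lands on the per-pixel target scales. The helper below is `make_new_uncond_at_scale` from this repo's `nodes.py`; the toy verification around it (uniform target scale, random tensors) is only a sketch to show the identity it relies on:

```python
import torch

def make_new_uncond_at_scale(cond, uncond, cond_scale, new_scale):
    # Blend the uncond so that CFG at cond_scale behaves like CFG at new_scale.
    # new_scale may be a tensor, giving a different effective scale per value.
    ratio = (new_scale - 1) / (cond_scale - 1)
    return cond * (1 - ratio) + uncond * ratio

cfg = lambda c, u, s: u + s * (c - u)  # plain CFG combination

cond, uncond = torch.randn(4, 128, 128), torch.randn(4, 128, 128)
new_uncond = make_new_uncond_at_scale(cond, uncond, cond_scale=8.0, new_scale=5.0)

# Sampling at scale 8 with the rewritten uncond equals sampling at scale 5.
assert torch.allclose(cfg(cond, new_uncond, 8.0), cfg(cond, uncond, 5.0), atol=1e-4)
```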

### Potential uses:

General light direction/composition influence (all same seed):

![combined_image](https://github.com/user-attachments/assets/647589b4-cea2-41c9-804f-fc59b7ba1b71)

Vignetting:

![combined_v_images](https://github.com/user-attachments/assets/fd492fad-634f-43ce-9d48-918bc56103a9)

Color influence:

![combined_rgb_image](https://github.com/user-attachments/assets/0e71e294-0d5f-4ab8-89ca-1012bc2528df)

Pattern matching, here with a black and white spiral:

![00347UI_00001_](https://github.com/user-attachments/assets/3b030e29-ba5b-4841-bbe7-eb5ae59d652c)

A blue one with a lower scale:

![00297UI_00001_](https://github.com/user-attachments/assets/bc271aa5-93d3-4438-8600-20ae05d47df3)

As you can notice, the details are pretty well done in general. It seems that using an input latent as a guide also helps with the overall quality. These examples use a "freshly" encoded latent; I haven't tried looping back a latent space resulting directly from sampling.

Text is a bit harder to enforce and may require more tweaking of the scales:

![00133UI_00001_](https://github.com/user-attachments/assets/9c8f1ae3-0411-401f-a6e8-3b4451479576)


Since it takes advantage of the "wiggle room" left by the CFG scale to make the generation match an image, it can hardly contradict what is being generated.

Here is an example using a black and red spiral. Since the base description is about black and white, I could only enforce the red by using destructive scales:

![combined_side_by_side_image](https://github.com/user-attachments/assets/f0a85a4b-4ad3-4d20-8248-6d1e81bdddc9)

### Side use:

- If only a mask is used as input, the selected maximum scale will be applied to the masked area (a short sketch of this follows the notes below).
- If nothing is connected: the positive prediction will be used as the guide for 74% of the sigmas and the negative for the last part.

Note:

- Given that this is a non-ML solution, unlike ControlNet it cannot tell the difference between a banana and a person. It simply tries to make the values match the input image. A giraffe is just an apple with different values at a different place.
- It is possible to chain this node multiple times, as long as the sum of all the strength sliders is equal to or below one.
- I added two image generators: one simply using RGB sliders, and a gradient generator which can also make circular patterns while outputting a mask, to make vignetting easy. You will find them in the "image" category.
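As mentioned in the side-use list above, here is a minimal sketch of the mask-only behavior (mirroring the mask-only branch of the gradient scaling node; the mask shape and values are illustrative). The masked area is pulled toward the maximum scale while the rest keeps the sampler's scale, and the resulting per-pixel scales are then folded back into the negative prediction exactly as in the earlier sketch:

```python
import torch

cond_scale, maximum_scale, strength = 8.0, 80.0, 0.5

# Toy mask: a square region that should react as if sampled at a higher CFG scale.
mask = torch.zeros(1, 1, 128, 128)
mask[..., 32:96, 32:96] = 1.0

# Masked values head toward maximum_scale; everything else keeps the sampler scale.
target_scales = maximum_scale * mask * strength + cond_scale * (1 - mask * strength)
```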
149 | -------------------------------------------------------------------------------- /__init__.py: -------------------------------------------------------------------------------- 1 | from .nodes import * 2 | 3 | NODE_CLASS_MAPPINGS = {} 4 | 5 | NODE_CLASS_MAPPINGS_ADD = { 6 | "Pre CFG automatic scale": automatic_pre_cfg, 7 | "Pre CFG uncond zero": uncondZeroPreCFGNode, 8 | "Pre CFG perp-neg": pre_cfg_perp_neg, 9 | # "Pre CFG re-negative": pre_cfg_re_negative, 10 | 11 | "Pre CFG PAG": perturbed_attention_guidance_pre_cfg_node, 12 | "Pre CFG zero attention": zero_attention_pre_cfg_node, 13 | "Pre CFG channel multiplier": channel_multiplier_node, 14 | "Pre CFG multiplier": multiply_cond_pre_cfg_node, 15 | 16 | "Pre CFG norm neg to pos": norm_uncond_to_cond_pre_cfg_node, 17 | "Pre CFG subtract mean": PreCFGsubtractMeanNode, 18 | "Pre CFG variable scaling": variable_scale_pre_cfg_node, 19 | "Pre CFG gradient scaling": gradient_scaling_pre_cfg_node, 20 | 21 | "Pre CFG flip flop": flip_flip_conds_pre_cfg_node, 22 | "Pre CFG replace negative channel": replace_uncond_channel_pre_cfg_node, 23 | "Pre CFG merge negative channel": merge_uncond_channel_pre_cfg_node, 24 | 25 | "Pre CFG sharpening": condDiffSharpeningNode, 26 | "Pre CFG exponentiation": condExpNode, 27 | 28 | "Conditioning set timestep from sigma": ConditioningSetTimestepRangeFromSigma, 29 | "Support empty uncond": support_empty_uncond_pre_cfg_node, 30 | "Shape attention": ShapeAttentionNode, 31 | "Excellent attention": ExlAttentionNode, 32 | "Post CFG subtract mean": PostCFGsubtractMeanNode, 33 | "Individual channel selector": individual_channel_selection_node, 34 | "Subtract noise mean": latent_noise_subtract_mean_node, 35 | "Empty RGB image": EmptyRGBImage, 36 | "Gradient RGB image": GradientRGBImage, 37 | } 38 | 39 | NODE_CLASS_MAPPINGS.update(NODE_CLASS_MAPPINGS_ADD) 40 | 41 | for c in [4,8,16,32,64,128]: 42 | NODE_CLASS_MAPPINGS[f"Channel selector for {c} channels"] = type("channel_selection_node", (channel_selection_node,), { "CHANNELS_AMOUNT": c}) 43 | -------------------------------------------------------------------------------- /nodes.py: -------------------------------------------------------------------------------- 1 | import torch 2 | import torch.nn.functional as F 3 | from math import ceil, floor 4 | from copy import deepcopy 5 | import comfy.model_patcher 6 | from comfy.sampler_helpers import convert_cond 7 | from comfy.samplers import calc_cond_batch, encode_model_conds 8 | from comfy.ldm.modules.attention import optimized_attention_for_device 9 | from nodes import ConditioningConcat, ConditioningSetTimestepRange 10 | import comfy.model_management as model_management 11 | from comfy.latent_formats import SDXL as SDXL_Latent 12 | import os 13 | 14 | current_dir = os.path.dirname(os.path.realpath(__file__)) 15 | SDXL_Latent = SDXL_Latent() 16 | sdxl_latent_rgb_factors = SDXL_Latent.latent_rgb_factors 17 | ConditioningConcat = ConditioningConcat() 18 | ConditioningSetTimestepRange = ConditioningSetTimestepRange() 19 | default_attention = optimized_attention_for_device(model_management.get_torch_device()) 20 | default_device = model_management.get_torch_device() 21 | 22 | weighted_average = lambda tensor1, tensor2, weight1: (weight1 * tensor1 + (1 - weight1) * tensor2) 23 | selfnorm = lambda x: x / x.norm() 24 | minmaxnorm = lambda x: torch.nan_to_num((x - x.min()) / (x.max() - x.min()), nan=0.0, posinf=1.0, neginf=0.0) 25 | normlike = lambda x, y: x / x.norm() * y.norm() 26 | 27 | def get_sigma_min_max(model): 28 | 
model_sampling = model.model.model_sampling 29 | sigma_min = model_sampling.sigma(model_sampling.timestep(model_sampling.sigma_min)).item() 30 | sigma_max = model_sampling.sigma(model_sampling.timestep(model_sampling.sigma_max)).item() 31 | return sigma_min, sigma_max 32 | 33 | @torch.no_grad() 34 | def make_new_uncond_at_scale(cond,uncond,cond_scale,new_scale): 35 | new_scale_ratio = (new_scale - 1) / (cond_scale - 1) 36 | return cond * (1 - new_scale_ratio) + uncond * new_scale_ratio 37 | 38 | @torch.no_grad() 39 | def make_new_uncond_at_scale_co(conds_out,cond_scale,new_scale): 40 | new_scale_ratio = (new_scale - 1) / (cond_scale - 1) 41 | return conds_out[0] * (1 - new_scale_ratio) + conds_out[1] * new_scale_ratio 42 | 43 | @torch.no_grad() 44 | def get_denoised_at_scale(x_orig,cond,uncond,cond_scale): 45 | return x_orig - ((x_orig - uncond) + cond_scale * ((x_orig - cond) - (x_orig - uncond))) 46 | 47 | class pre_cfg_perp_neg: 48 | @classmethod 49 | def INPUT_TYPES(s): 50 | return {"required": { 51 | "model": ("MODEL",), 52 | "clip": ("CLIP",), 53 | "neg_scale": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0, "step": 1/10, "round": 0.01}), 54 | "set_context_length" : ("BOOLEAN", {"default": False}), 55 | "context_length": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}), 56 | "start_at_sigma": ("FLOAT", {"default": 15, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 57 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 58 | # "cond_or_uncond": (["both","uncond"], {"default":"uncond"}), 59 | } 60 | } 61 | RETURN_TYPES = ("MODEL",) 62 | FUNCTION = "patch" 63 | 64 | CATEGORY = "model_patches/Pre CFG" 65 | 66 | def patch(self, model, clip, neg_scale, set_context_length, context_length, start_at_sigma, end_at_sigma, cond_or_uncond="uncond"): 67 | empty_cond, pooled = clip.encode_from_tokens(clip.tokenize(""), return_pooled=True) 68 | nocond = [[empty_cond, {"pooled_output": pooled}]] 69 | if context_length > 1 and set_context_length: 70 | short_nocond = deepcopy(nocond) 71 | for x in range(context_length - 1): 72 | (nocond,) = ConditioningConcat.concat(nocond, short_nocond) 73 | nocond = convert_cond(nocond) 74 | 75 | @torch.no_grad() 76 | def pre_cfg_perp_neg_function(args): 77 | conds_out = args["conds_out"] 78 | noise_pred_pos = conds_out[0] 79 | 80 | if args["sigma"][0] > start_at_sigma or args["sigma"][0] <= end_at_sigma or not torch.any(conds_out[1]): 81 | return conds_out 82 | 83 | noise_pred_neg = conds_out[1] 84 | 85 | model_options = args["model_options"] 86 | timestep = args["timestep"] 87 | model = args["model"] 88 | x = args["input"] 89 | 90 | nocond_processed = encode_model_conds(model.extra_conds, nocond, x, x.device, "negative") 91 | (noise_pred_nocond,) = calc_cond_batch(model, [nocond_processed], x, timestep, model_options) 92 | 93 | pos = noise_pred_pos - noise_pred_nocond 94 | neg = noise_pred_neg - noise_pred_nocond 95 | 96 | perp = neg - ((torch.mul(neg, pos).sum())/(torch.norm(pos)**2)) * pos 97 | perp_neg = perp * neg_scale 98 | 99 | if cond_or_uncond == "both": 100 | perp_p = pos - ((torch.mul(neg, pos).sum())/(torch.norm(neg)**2)) * neg 101 | perp_pos = perp_p * neg_scale 102 | conds_out[0] = noise_pred_nocond + perp_pos 103 | else: 104 | conds_out[0] = noise_pred_nocond + pos 105 | conds_out[1] = noise_pred_nocond + perp_neg 106 | 107 | return conds_out 108 | 109 | m = model.clone() 110 | m.set_model_sampler_pre_cfg_function(pre_cfg_perp_neg_function) 111 | return (m, ) 112 | 113 | 
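# Math note for pre_cfg_perp_neg above: with pos = cond - nocond and
# neg = uncond - nocond, the projection
#     perp = neg - ((neg . pos) / ||pos||^2) * pos
# removes the component of the negative prediction that is parallel to the
# positive one, so the rescaled negative only pushes along directions
# orthogonal to the prompt (see https://perp-neg.github.io/).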
class pre_cfg_re_negative: 114 | @classmethod 115 | def INPUT_TYPES(s): 116 | return {"required": { 117 | "model": ("MODEL",), 118 | "clip": ("CLIP",), 119 | "empty_proportion": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 1/20, "round": 0.01}), 120 | "progressive_scale" : ("BOOLEAN", {"default": False}), 121 | "set_context_length" : ("BOOLEAN", {"default": False}), 122 | "context_length": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}), 123 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 124 | } 125 | } 126 | RETURN_TYPES = ("MODEL",) 127 | FUNCTION = "patch" 128 | 129 | CATEGORY = "model_patches/Pre CFG" 130 | 131 | def patch(self, model, clip, empty_proportion, progressive_scale, set_context_length, context_length, end_at_sigma): 132 | sigma_min, sigma_max = get_sigma_min_max(model) 133 | empty_cond, pooled = clip.encode_from_tokens(clip.tokenize(""), return_pooled=True) 134 | nocond = [[empty_cond, {"pooled_output": pooled}]] 135 | if context_length > 1 and set_context_length: 136 | short_nocond = deepcopy(nocond) 137 | for x in range(context_length - 1): 138 | (nocond,) = ConditioningConcat.concat(nocond, short_nocond) 139 | nocond = convert_cond(nocond) 140 | 141 | @torch.no_grad() 142 | def pre_cfg_patch(args): 143 | conds_out = args["conds_out"] 144 | sigma = args["sigma"][0] 145 | # cond_scale = args["cond_scale"] 146 | 147 | if sigma <= end_at_sigma or not torch.any(conds_out[1]): 148 | return conds_out 149 | 150 | model_options = args["model_options"] 151 | timestep = args["timestep"] 152 | model = args["model"] 153 | x_orig = args["input"] 154 | 155 | nocond_processed = encode_model_conds(model.extra_conds, nocond, x_orig, x_orig.device, "negative") 156 | (noise_pred_nocond,) = calc_cond_batch(model, [nocond_processed], x_orig, timestep, model_options) 157 | if progressive_scale: 158 | progression = (sigma - sigma_min) / (sigma_max - sigma_min) 159 | current_scale = progression * empty_proportion + (1 - progression) * (1 - empty_proportion) 160 | current_scale = torch.clamp(current_scale, min=0, max=1) 161 | conds_out[1] = current_scale * noise_pred_nocond + conds_out[1] * (1 - current_scale) 162 | else: 163 | conds_out[1] = empty_proportion * noise_pred_nocond + conds_out[1] * (1 - empty_proportion) 164 | 165 | return conds_out 166 | 167 | m = model.clone() 168 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 169 | return (m, ) 170 | 171 | @torch.no_grad() 172 | def normalize_adjust(a,b,strength=1): 173 | norm_a = torch.linalg.norm(a) 174 | a = selfnorm(a) 175 | b = selfnorm(b) 176 | res = b - a * (a * b).sum() 177 | if res.isnan().any(): 178 | res = torch.nan_to_num(res, nan=0.0) 179 | a = a - res * strength 180 | return a * norm_a 181 | 182 | class condDiffSharpeningNode: 183 | @classmethod 184 | def INPUT_TYPES(s): 185 | return {"required": { 186 | "model": ("MODEL",), 187 | "do_on": (["both","cond","uncond"], {"default": "both"},), 188 | "scale": ("FLOAT", {"default": 0.75, "min": -10.0, "max": 10.0, "step": 1/20, "round": 1/100}), 189 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 190 | "end_at_sigma": ("FLOAT", {"default": 01.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 191 | } 192 | } 193 | RETURN_TYPES = ("MODEL",) 194 | FUNCTION = "patch" 195 | 196 | CATEGORY = "model_patches/Pre CFG" 197 | 198 | def patch(self, model, do_on, scale, start_at_sigma, end_at_sigma): 199 | model_sampling = 
model.model.model_sampling 200 | sigma_max = model_sampling.sigma(model_sampling.timestep(model_sampling.sigma_max)).item() 201 | prev_cond = None 202 | prev_uncond = None 203 | 204 | @torch.no_grad() 205 | def sharpen_conds_pre_cfg(args): 206 | nonlocal prev_cond, prev_uncond 207 | conds_out = args["conds_out"] 208 | uncond = torch.any(conds_out[1]) 209 | 210 | sigma = args["sigma"][0].item() 211 | first_step = sigma > (sigma_max - 1) 212 | if first_step: 213 | prev_cond = None 214 | prev_uncond = None 215 | 216 | for b in range(len(conds_out[0])): 217 | for c in range(len(conds_out[0][b])): 218 | if not first_step and sigma > end_at_sigma and sigma <= start_at_sigma: 219 | if prev_cond is not None and do_on in ['both','cond']: 220 | conds_out[0][b][c] = normalize_adjust(conds_out[0][b][c], prev_cond[b][c], scale) 221 | if prev_uncond is not None and uncond and do_on in ['both','uncond']: 222 | conds_out[1][b][c] = normalize_adjust(conds_out[1][b][c], prev_uncond[b][c], scale) 223 | 224 | prev_cond = conds_out[0] 225 | if uncond: 226 | prev_uncond = conds_out[1] 227 | 228 | return conds_out 229 | 230 | m = model.clone() 231 | m.set_model_sampler_pre_cfg_function(sharpen_conds_pre_cfg) 232 | return (m, ) 233 | 234 | @torch.no_grad() 235 | def normalized_pow(t,p): 236 | t_norm = t.norm() 237 | t_sign = t.sign() 238 | t_pow = (t / t_norm).abs().pow(p) 239 | t_pow = selfnorm(t_pow) * t_norm * t_sign 240 | return t_pow 241 | 242 | class condExpNode: 243 | @classmethod 244 | def INPUT_TYPES(s): 245 | return {"required": { 246 | "model": ("MODEL",), 247 | "do_on": (["both","cond","uncond"], {"default": "both"},), 248 | "exponent": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 10.0, "step": 1/20, "round": 1/100}), 249 | } 250 | } 251 | RETURN_TYPES = ("MODEL",) 252 | FUNCTION = "patch" 253 | 254 | CATEGORY = "model_patches/Pre CFG" 255 | 256 | def patch(self, model, do_on, exponent): 257 | @torch.no_grad() 258 | def exponentiate_conds_pre_cfg(args): 259 | if args["sigma"][0] <= 1: return args["conds_out"] 260 | 261 | conds_out = args["conds_out"] 262 | uncond = torch.any(conds_out[1]) 263 | 264 | if do_on in ['both','uncond'] and not uncond: 265 | return conds_out 266 | 267 | for b in range(len(conds_out[0])): 268 | if do_on in ['both','cond']: 269 | conds_out[0][b] = normalized_pow(conds_out[0][b], exponent) 270 | if uncond and do_on in ['both','uncond']: 271 | conds_out[1][b] = normalized_pow(conds_out[1][b], exponent) 272 | 273 | return conds_out 274 | 275 | m = model.clone() 276 | m.set_model_sampler_pre_cfg_function(exponentiate_conds_pre_cfg) 277 | return (m, ) 278 | 279 | @torch.no_grad() 280 | def topk_average(latent, top_k=0.25, measure="average"): 281 | max_values = torch.topk(latent.flatten(), k=ceil(latent.numel()*top_k), largest=True ).values 282 | min_values = torch.topk(latent.flatten(), k=ceil(latent.numel()*top_k), largest=False).values 283 | value_range = measuring_methods[measure](max_values, min_values) 284 | return value_range 285 | 286 | apply_scaling_methods = { 287 | "individual": lambda c, m: c * torch.tensor(m).view(c.shape[0],1,1).to(c.device), 288 | "all_as_one": lambda c, m: c * m[0], 289 | "average_of_all_channels" : lambda c, m: c * (sum(m) / len(m)), 290 | "smallest_of_all_channels": lambda c, m: c * min(m), 291 | "biggest_of_all_channels" : lambda c, m: c * max(m), 292 | } 293 | 294 | measuring_methods = { 295 | "difference": lambda x, y: (x.mean() - y.mean()).abs() / 2, 296 | "average": lambda x, y: (x.mean() + y.abs().mean()) / 2, 297 | "biggest": lambda x, y: 
max(x.mean(), y.abs().mean()), 298 | } 299 | 300 | class automatic_pre_cfg: 301 | @classmethod 302 | def INPUT_TYPES(s): 303 | scaling_methods_names = [k for k in apply_scaling_methods] 304 | measuring_methods_names = [k for k in measuring_methods] 305 | return {"required": { 306 | "model": ("MODEL",), 307 | "scaling_method": (scaling_methods_names, {"default": scaling_methods_names[0]}), 308 | "min_max_method": ([m for m in measuring_methods], {"default": measuring_methods_names[1]}), 309 | "reference_CFG": ("FLOAT", {"default": 8, "min": 0.0, "max": 100, "step": 1/10, "round": 1/100}), 310 | "scale_multiplier": ("FLOAT", {"default": 0.8, "min": 0.0, "max": 100, "step": 1/100, "round": 1/100}), 311 | "top_k": ("FLOAT", {"default": 0.25, "min": 0.0, "max": 0.5, "step": 1/20, "round": 1/100}), 312 | }, 313 | "optional": { 314 | "channels_selection": ("CHANS",), 315 | } 316 | } 317 | RETURN_TYPES = ("MODEL","STRING",) 318 | RETURN_NAMES = ("MODEL","parameters",) 319 | FUNCTION = "patch" 320 | 321 | CATEGORY = "model_patches/Pre CFG" 322 | 323 | def patch(self, model, scaling_method, min_max_method="difference", reference_CFG=8, scale_multiplier=0.8, top_k=0.25, channels_selection=None): 324 | parameters_string = f"scaling_method: {scaling_method}\nmin_max_method: {min_max_method}" 325 | if channels_selection is not None: 326 | for x in range(len(channels_selection)): 327 | parameters_string += f"\nchannel {x+1}: {channels_selection[x]}" 328 | scaling_methods_names = [k for k in apply_scaling_methods] 329 | @torch.no_grad() 330 | def automatic_pre_cfg(args): 331 | conds_out = args["conds_out"] 332 | cond_scale = args["cond_scale"] 333 | uncond = torch.any(conds_out[1]) 334 | if reference_CFG == 0: 335 | reference_scale = cond_scale 336 | else: 337 | reference_scale = reference_CFG 338 | 339 | if not uncond: 340 | return conds_out 341 | 342 | if channels_selection is None: 343 | channels = [True for _ in range(conds_out[0].shape[-3])] 344 | else: 345 | channels = channels_selection 346 | 347 | for b in range(len(conds_out[0])): 348 | chans = [] 349 | 350 | if scaling_method == scaling_methods_names[1]: 351 | if all(channels): 352 | mes = topk_average(reference_scale * conds_out[0][b] - (reference_scale - 1) * conds_out[1][b], top_k=top_k, measure=min_max_method) 353 | else: 354 | cond_for_measure = torch.stack([conds_out[0][b][j] for j in range(len(channels)) if channels[j]]) 355 | uncond_for_measure = torch.stack([conds_out[1][b][j] for j in range(len(channels)) if channels[j]]) 356 | mes = topk_average(reference_scale * cond_for_measure - (reference_scale - 1) * uncond_for_measure, top_k=top_k, measure=min_max_method) 357 | chans.append(scale_multiplier / max(mes,0.01)) 358 | else: 359 | for c in range(len(conds_out[0][b])): 360 | if not channels[c]: 361 | if scaling_method == scaling_methods_names[0]: 362 | chans.append(1) 363 | continue 364 | mes = topk_average(reference_scale * conds_out[0][b][c] - (reference_scale - 1) * conds_out[1][b][c], top_k=top_k, measure=min_max_method) 365 | new_scale = scale_multiplier / max(mes,0.01) 366 | chans.append(new_scale) 367 | 368 | 369 | conds_out[0][b] = apply_scaling_methods[scaling_method](conds_out[0][b],chans) 370 | conds_out[1][b] = apply_scaling_methods[scaling_method](conds_out[1][b],chans) 371 | 372 | return conds_out 373 | 374 | m = model.clone() 375 | m.set_model_sampler_pre_cfg_function(automatic_pre_cfg) 376 | return (m, parameters_string,) 377 | 378 | class channel_selection_node: 379 | CHANNELS_AMOUNT = 4 380 | @classmethod 381 | def 
INPUT_TYPES(s): 382 | toggles = {f"channel_{x}" : ("BOOLEAN", {"default": True}) for x in range(s.CHANNELS_AMOUNT)} 383 | return {"required": toggles} 384 | 385 | RETURN_TYPES = ("CHANS",) 386 | FUNCTION = "exec" 387 | 388 | CATEGORY = "model_patches/Pre CFG/channels_selectors" 389 | 390 | def exec(self, **kwargs): 391 | chans = [] 392 | for k, v in kwargs.items(): 393 | if "channel_" in k: 394 | chans.append(v) 395 | return (chans, ) 396 | 397 | class individual_channel_selection_node: 398 | @classmethod 399 | def INPUT_TYPES(s): 400 | return {"required": { 401 | "exclude" : ("BOOLEAN", {"default": False}), 402 | "selected_channel": ("INT", {"default": 1, "min": 1, "max": 128}), 403 | "total_channels" : ("INT", {"default": 4, "min": 1, "max": 128}), 404 | } 405 | } 406 | 407 | RETURN_TYPES = ("CHANS",) 408 | FUNCTION = "exec" 409 | CATEGORY = "model_patches/Pre CFG/channels_selectors" 410 | def exec(self, exclude, selected_channel, total_channels): 411 | chans = [exclude for _ in range(total_channels)] 412 | chans[selected_channel - 1] = not exclude 413 | return (chans, ) 414 | 415 | class channel_multiplier_node: 416 | @classmethod 417 | def INPUT_TYPES(s): 418 | return {"required": { 419 | "model": ("MODEL",), 420 | "channel_1": ("FLOAT", {"default": 1, "min": -10.0, "max": 10.0, "step": 1/100, "round": 1/100}), 421 | "channel_2": ("FLOAT", {"default": 1, "min": -10.0, "max": 10.0, "step": 1/100, "round": 1/100}), 422 | "channel_3": ("FLOAT", {"default": 1, "min": -10.0, "max": 10.0, "step": 1/100, "round": 1/100}), 423 | "channel_4": ("FLOAT", {"default": 1, "min": -10.0, "max": 10.0, "step": 1/100, "round": 1/100}), 424 | "selection": (["both","cond","uncond"],), 425 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 426 | "end_at_sigma": ("FLOAT", {"default": 01.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 427 | } 428 | } 429 | RETURN_TYPES = ("MODEL",) 430 | FUNCTION = "patch" 431 | 432 | CATEGORY = "model_patches/Pre CFG" 433 | 434 | def patch(self, model, channel_1, channel_2, channel_3, channel_4, selection, start_at_sigma, end_at_sigma): 435 | chans = [channel_1, channel_2, channel_3, channel_4] 436 | @torch.no_grad() 437 | def channel_multiplier_function(args): 438 | conds_out = args["conds_out"] 439 | uncond = torch.any(conds_out[1]) 440 | sigma = args["sigma"] 441 | if sigma[0] <= end_at_sigma or sigma[0] > start_at_sigma: 442 | return conds_out 443 | for b in range(len(conds_out[0])): 444 | for c in range(len(conds_out[0][b])): 445 | if selection in ["both","cond"]: 446 | conds_out[0][b][c] *= chans[c] 447 | if uncond and selection in ["both","uncond"]: 448 | conds_out[1][b][c] *= chans[c] 449 | return conds_out 450 | 451 | m = model.clone() 452 | m.set_model_sampler_pre_cfg_function(channel_multiplier_function) 453 | return (m, ) 454 | 455 | class support_empty_uncond_pre_cfg_node: 456 | @classmethod 457 | def INPUT_TYPES(s): 458 | return {"required": {"model": ("MODEL",), 459 | "method": (["from cond","divide by CFG"],), 460 | }} 461 | RETURN_TYPES = ("MODEL",) 462 | FUNCTION = "patch" 463 | 464 | CATEGORY = "model_patches/Pre CFG" 465 | 466 | def patch(self, model, method): 467 | @torch.no_grad() 468 | def support_empty_uncond(args): 469 | conds_out = args["conds_out"] 470 | uncond = torch.any(conds_out[1]) 471 | cond_scale = args["cond_scale"] 472 | 473 | if not uncond and cond_scale > 1: 474 | if method == "divide by CFG": 475 | conds_out[0] /= cond_scale 476 | else: 477 | conds_out[1] = 
conds_out[0].clone() 478 | return conds_out 479 | 480 | m = model.clone() 481 | m.set_model_sampler_pre_cfg_function(support_empty_uncond) 482 | return (m, ) 483 | 484 | def replace_timestep(cond): 485 | cond = deepcopy(cond) 486 | cond[0]['timestep_start'] = 999999999.9 487 | cond[0]['timestep_end'] = 0.0 488 | return cond 489 | 490 | def check_if_in_timerange(conds,timestep_in): 491 | for c in conds: 492 | all_good = True 493 | if 'timestep_start' in c: 494 | timestep_start = c['timestep_start'] 495 | if timestep_in[0] > timestep_start: 496 | all_good = False 497 | if 'timestep_end' in c: 498 | timestep_end = c['timestep_end'] 499 | if timestep_in[0] < timestep_end: 500 | all_good = False 501 | if all_good: return True 502 | return False 503 | 504 | class zero_attention_pre_cfg_node: 505 | @classmethod 506 | def INPUT_TYPES(s): 507 | return {"required": {"model": ("MODEL",), 508 | "do_on": (["cond","uncond"], {"default": "uncond"},), 509 | "mix_scale": ("FLOAT", {"default": 1.5, "min": -2.0, "max": 2.0, "step": 1/2, "round": 1/100}), 510 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 511 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 512 | # "attention": (["both","self","cross"],), 513 | # "unet_block": (["input","middle","output"],), 514 | # "unet_block_id": ("INT", {"default": 8, "min": 0, "max": 20}), 515 | }} 516 | RETURN_TYPES = ("MODEL",) 517 | FUNCTION = "patch" 518 | 519 | CATEGORY = "model_patches/Pre CFG" 520 | 521 | def patch(self, model, do_on, mix_scale, start_at_sigma, end_at_sigma, attention="both", unet_block="input", unet_block_id=8): 522 | cond_index = 1 if do_on == "uncond" else 0 523 | attn = {"both":["attn1","attn2"],"self":["attn1"],"cross":["attn2"]}[attention] 524 | 525 | def zero_attention_function(q, k, v, extra_options, mask=None): 526 | return torch.zeros_like(q) 527 | 528 | @torch.no_grad() 529 | def zero_attention_pre_cfg_patch(args): 530 | conds_out = args["conds_out"] 531 | sigma = args["sigma"][0].item() 532 | 533 | if sigma > start_at_sigma or sigma <= end_at_sigma: 534 | return conds_out 535 | 536 | conds = args["conds"] 537 | cond_to_process = conds[cond_index] 538 | cond_generated = torch.any(conds_out[cond_index]) 539 | 540 | if not cond_generated: 541 | cond_to_process = replace_timestep(cond_to_process) 542 | elif mix_scale == 1: 543 | print(" Mix scale at one!\nPrediction not generated.\nUse the node ConditioningSetTimestepRange to avoid generating if you want to use this node.") 544 | return conds_out 545 | 546 | model_options = deepcopy(args["model_options"]) 547 | for att in attn: 548 | model_options = comfy.model_patcher.set_model_options_patch_replace(model_options, zero_attention_function, att, unet_block, unet_block_id) 549 | 550 | (noise_pred,) = calc_cond_batch(args['model'], [cond_to_process], args['input'], args['timestep'], model_options) 551 | 552 | if mix_scale == 1 or not cond_generated: 553 | conds_out[cond_index] = noise_pred 554 | elif cond_generated: 555 | conds_out[cond_index] = weighted_average(noise_pred,conds_out[cond_index],mix_scale) 556 | 557 | return conds_out 558 | 559 | m = model.clone() 560 | m.set_model_sampler_pre_cfg_function(zero_attention_pre_cfg_patch) 561 | return (m, ) 562 | 563 | class perturbed_attention_guidance_pre_cfg_node: 564 | @classmethod 565 | def INPUT_TYPES(s): 566 | return {"required": {"model": ("MODEL",), 567 | "scale": ("FLOAT", {"default": 0.5, "min": -2.0, "max": 10.0, 
"step": 1/20, "round": 1/100}), 568 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 569 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 570 | }} 571 | RETURN_TYPES = ("MODEL",) 572 | FUNCTION = "patch" 573 | 574 | CATEGORY = "model_patches/Pre CFG" 575 | 576 | def patch(self, model, scale, start_at_sigma, end_at_sigma, do_on="cond", attention="self", unet_block="middle", unet_block_id=0): 577 | cond_index = 1 if do_on == "uncond" else 0 578 | attn = {"both":["attn1","attn2"],"self":["attn1"],"cross":["attn2"]}[attention] 579 | 580 | def perturbed_attention_guidance(q, k, v, extra_options, mask=None): 581 | return v 582 | 583 | @torch.no_grad() 584 | def perturbed_attention_guidance_pre_cfg_patch(args): 585 | conds_out = args["conds_out"] 586 | sigma = args["sigma"][0].item() 587 | 588 | if sigma > start_at_sigma or sigma <= end_at_sigma: 589 | return conds_out 590 | 591 | conds = args["conds"] 592 | cond_to_process = conds[cond_index] 593 | cond_generated = torch.any(conds_out[cond_index]) 594 | 595 | if not cond_generated: 596 | return conds_out 597 | 598 | model_options = deepcopy(args["model_options"]) 599 | for att in attn: 600 | model_options = comfy.model_patcher.set_model_options_patch_replace(model_options, perturbed_attention_guidance, att, unet_block, unet_block_id) 601 | 602 | (noise_pred,) = calc_cond_batch(args['model'], [cond_to_process], args['input'], args['timestep'], model_options) 603 | 604 | conds_out[cond_index] = conds_out[cond_index] + (conds_out[cond_index] - noise_pred) * scale 605 | 606 | return conds_out 607 | 608 | m = model.clone() 609 | m.set_model_sampler_pre_cfg_function(perturbed_attention_guidance_pre_cfg_patch) 610 | return (m, ) 611 | 612 | def sigma_to_percent(model_sampling, sigma_value): 613 | if sigma_value >= 999999999.9: 614 | return 0.0 615 | if sigma_value <= 0.0: 616 | return 1.0 617 | sigma_tensor = torch.tensor([sigma_value], dtype=torch.float32) 618 | timestep = model_sampling.timestep(sigma_tensor) 619 | percent = 1.0 - (timestep.item() / 999.0) 620 | return percent 621 | 622 | class ConditioningSetTimestepRangeFromSigma: 623 | @classmethod 624 | def INPUT_TYPES(s): 625 | return {"required": {"model": ("MODEL",), 626 | "conditioning": ("CONDITIONING", ), 627 | "sigma_start" : ("FLOAT", {"default": 15.0, "min": 0.0, "max": 10000.0, "step": 0.01}), 628 | "sigma_end" : ("FLOAT", {"default": 0.0, "min": 0.0, "max": 10000.0, "step": 0.01}) 629 | }} 630 | RETURN_TYPES = ("CONDITIONING",) 631 | FUNCTION = "set_range" 632 | 633 | CATEGORY = "advanced/conditioning" 634 | 635 | def set_range(self, model, conditioning, sigma_start, sigma_end): 636 | model_sampling = model.model.model_sampling 637 | (c, ) = ConditioningSetTimestepRange.set_range(conditioning,sigma_to_percent(model_sampling, sigma_start),sigma_to_percent(model_sampling, sigma_end)) 638 | return (c, ) 639 | 640 | class ShapeAttentionNode: 641 | @classmethod 642 | def INPUT_TYPES(s): 643 | return { 644 | "required": { 645 | "model": ("MODEL",), 646 | "scale": ("FLOAT", {"default": 1.5, "min": 0.0, "max": 10.0, "step": 1/10, "round": 1/100}), 647 | # "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 648 | # "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 649 | # "enabled" : ("BOOLEAN", {"default": True}), 650 | # "attention": (["both","self","cross"],), 651 | # 
"unet_block": (["input","middle","output"],), 652 | # "unet_block_id": ("INT", {"default": 8, "min": 0, "max": 20}), # uncomment these lines if you want to have fun with the other layers 653 | } 654 | } 655 | 656 | RETURN_TYPES = ("MODEL",) 657 | FUNCTION = "patch" 658 | 659 | CATEGORY = "model_patches" 660 | 661 | def patch(self, model, scale, start_at_sigma=999999999.9, end_at_sigma=0.0, enabled=True, attention="self", unet_block="input", unet_block_id=8): 662 | attn = {"both":["attn1","attn2"],"self":["attn1"],"cross":["attn2"]}[attention] 663 | if scale == 1: 664 | print(" Shape attention disabled (scale is one)") 665 | if not enabled or scale == 1: 666 | return (model,) 667 | 668 | m = model.clone() 669 | 670 | def shape_attention(q, k, v, extra_options, mask=None): 671 | sigma = extra_options['sigmas'][0] 672 | if sigma > start_at_sigma or sigma <= end_at_sigma: 673 | return default_attention(q, k, v, extra_options['n_heads'], mask) 674 | if scale != 0: 675 | return default_attention(q, k, v, extra_options['n_heads'], mask) * scale 676 | else: 677 | return torch.zeros_like(q) 678 | 679 | for att in attn: 680 | m.model_options = comfy.model_patcher.set_model_options_patch_replace(m.model_options, shape_attention, att, unet_block, unet_block_id) 681 | 682 | return (m,) 683 | 684 | class ExlAttentionNode: 685 | @classmethod 686 | def INPUT_TYPES(s): 687 | return { 688 | "required": { 689 | "model": ("MODEL",), 690 | "scale": ("FLOAT", {"default": 2, "min": -1.0, "max": 10.0, "step": 1/10, "round": 1/100}), 691 | "enabled": ("BOOLEAN", {"default": True}), 692 | } 693 | } 694 | 695 | RETURN_TYPES = ("MODEL",) 696 | FUNCTION = "patch" 697 | 698 | CATEGORY = "model_patches" 699 | 700 | def patch(self, model, scale, enabled): 701 | if not enabled: 702 | return (model,) 703 | m = model.clone() 704 | def cross_patch(q, k, v, extra_options, mask=None): 705 | first_attention = default_attention(q, k, v, extra_options['n_heads'], mask) 706 | second_attention = normlike(q+(q-default_attention(first_attention, k, v, extra_options['n_heads'])), first_attention) * scale 707 | return second_attention 708 | m.model_options = comfy.model_patcher.set_model_options_patch_replace(m.model_options, cross_patch, "attn2", "middle", 0) 709 | return (m,) 710 | 711 | class PreCFGsubtractMeanNode: 712 | @classmethod 713 | def INPUT_TYPES(s): 714 | return { 715 | "required": { 716 | "model": ("MODEL",), 717 | # "per_channel" : ("BOOLEAN", {"default": False}), #It's just not good 718 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 719 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 720 | "enabled" : ("BOOLEAN", {"default": True}), 721 | } 722 | } 723 | 724 | RETURN_TYPES = ("MODEL",) 725 | FUNCTION = "patch" 726 | 727 | CATEGORY = "model_patches/Pre CFG" 728 | 729 | def patch(self, model, start_at_sigma, end_at_sigma, enabled, per_channel=False): 730 | if not enabled: return (model,) 731 | m = model.clone() 732 | def pre_cfg_function(args): 733 | conds_out = args["conds_out"] 734 | sigma = args["sigma"][0].item() 735 | if sigma > start_at_sigma or sigma <= end_at_sigma: 736 | return conds_out 737 | for x in range(len(conds_out)): 738 | if torch.any(conds_out[x]): 739 | for b in range(len(conds_out[x])): 740 | if per_channel: 741 | for c in range(len(conds_out[x][b])): 742 | conds_out[x][b][c] -= conds_out[x][b][c].mean() 743 | else: 744 | conds_out[x][b] -= conds_out[x][b].mean() 745 | return 
conds_out 746 | m.set_model_sampler_pre_cfg_function(pre_cfg_function) 747 | return (m,) 748 | 749 | class PostCFGsubtractMeanNode: 750 | @classmethod 751 | def INPUT_TYPES(s): 752 | return { 753 | "required": { 754 | "model": ("MODEL",), 755 | # "per_channel" : ("BOOLEAN", {"default": False}), #It's just not good 756 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 757 | "end_at_sigma": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 758 | "enabled" : ("BOOLEAN", {"default": True}), 759 | } 760 | } 761 | 762 | RETURN_TYPES = ("MODEL",) 763 | FUNCTION = "patch" 764 | 765 | CATEGORY = "model_patches" 766 | 767 | def patch(self, model, start_at_sigma, end_at_sigma, enabled, per_channel=False): 768 | if not enabled: return (model,) 769 | m = model.clone() 770 | def post_cfg_function(args): 771 | cfg_result = args["denoised"] 772 | sigma = args["sigma"][0].item() 773 | if sigma > start_at_sigma or sigma <= end_at_sigma: 774 | return cfg_result 775 | for b in range(len(cfg_result)): 776 | if per_channel: 777 | for c in range(len(cfg_result[b])): 778 | cfg_result[b][c] -= cfg_result[b][c].mean() 779 | else: 780 | cfg_result[b] -= cfg_result[b].mean() 781 | return cfg_result 782 | m.set_model_sampler_post_cfg_function(post_cfg_function) 783 | return (m,) 784 | 785 | class PostCFGDotNode: 786 | @classmethod 787 | def INPUT_TYPES(s): 788 | return { 789 | "required": { 790 | "model": ("MODEL",), 791 | "batch": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), 792 | "channel": ("INT", {"default": 0, "min": 0, "max": 100, "step": 1}), 793 | "coord_x": ("INT", {"default": 64, "min": 0, "max": 1000, "step": 1}), 794 | "coord_y": ("INT", {"default": 64, "min": 0, "max": 1000, "step": 1}), 795 | "value": ("FLOAT", {"default": 1, "min": -10.0, "max": 10.0, "step": 1/10, "round": 1/100}), 796 | "start_at_sigma": ("FLOAT", {"default": 0.1, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 797 | "end_at_sigma": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 798 | "enabled" : ("BOOLEAN", {"default": True}), 799 | } 800 | } 801 | 802 | RETURN_TYPES = ("MODEL",) 803 | FUNCTION = "patch" 804 | 805 | CATEGORY = "model_patches" 806 | 807 | def patch(self, model, batch, channel, coord_x, coord_y, value, start_at_sigma, end_at_sigma, enabled): 808 | if not enabled: return (model,) 809 | m = model.clone() 810 | def post_cfg_function(args): 811 | cfg_result = args["denoised"] 812 | sigma = args["sigma"][0].item() 813 | if sigma > start_at_sigma or sigma <= end_at_sigma: 814 | return cfg_result 815 | 816 | channel_norm = cfg_result[batch][channel].norm() 817 | cfg_result[batch][channel] /= channel_norm 818 | cfg_result[batch][channel][coord_y][coord_x] = value 819 | cfg_result[batch][channel] *= channel_norm 820 | 821 | return cfg_result 822 | 823 | m.set_model_sampler_post_cfg_function(post_cfg_function) 824 | return (m,) 825 | 826 | class uncondZeroPreCFGNode: 827 | @classmethod 828 | def INPUT_TYPES(s): 829 | scaling_methods_names = [k for k in apply_scaling_methods] 830 | return {"required": { 831 | "model": ("MODEL",), 832 | "scale": ("FLOAT", {"default": 0.75, "min": 0.0, "max": 10.0, "step": 1/20, "round": 0.01}), 833 | "start_at_sigma": ("FLOAT", {"default": 100, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 834 | "end_at_sigma": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 835 | "scaling_method": 
(scaling_methods_names, {"default": scaling_methods_names[2]}), 836 | } 837 | } 838 | RETURN_TYPES = ("MODEL",) 839 | FUNCTION = "patch" 840 | 841 | CATEGORY = "model_patches/Pre CFG" 842 | 843 | def patch(self, model, scale, start_at_sigma, end_at_sigma, scaling_method): 844 | scaling_methods_names = [k for k in apply_scaling_methods] 845 | @torch.no_grad() 846 | def uncond_zero_pre_cfg(args): 847 | conds_out = args["conds_out"] 848 | uncond = torch.any(conds_out[1]) 849 | sigma = args["sigma"][0].item() 850 | if uncond or sigma <= end_at_sigma or sigma > start_at_sigma: 851 | return conds_out 852 | 853 | for b in range(len(conds_out[0])): 854 | chans = [] 855 | if scaling_method == scaling_methods_names[1]: 856 | mes = topk_average(8 * conds_out[0][b] - 7 * conds_out[1][b], measure="difference") 857 | for c in range(len(conds_out[0][b])): 858 | mes = topk_average(conds_out[0][b][c], measure="difference") ** 0.5 859 | chans.append(scale / mes) 860 | conds_out[0][b] = apply_scaling_methods[scaling_method](conds_out[0][b],chans) 861 | return conds_out 862 | 863 | m = model.clone() 864 | m.set_model_sampler_pre_cfg_function(uncond_zero_pre_cfg) 865 | return (m, ) 866 | 867 | class variable_scale_pre_cfg_node: 868 | @classmethod 869 | def INPUT_TYPES(s): 870 | return {"required": {"model": ("MODEL",), 871 | "target_scale": ("FLOAT", {"default": 5.0, "min": 1.0, "max": 100.0, "step": 1/2, "round": 1/100}), 872 | "target_as_start": ("BOOLEAN", {"default": True}), 873 | "proportional_to": (["sigma","steps progression"],), 874 | }} 875 | RETURN_TYPES = ("MODEL",) 876 | FUNCTION = "patch" 877 | 878 | CATEGORY = "model_patches/Pre CFG" 879 | 880 | def patch(self, model, target_scale, target_as_start, proportional_to): 881 | model_sampling = model.model.model_sampling 882 | sigma_max = model_sampling.sigma(model_sampling.timestep(model_sampling.sigma_max)).item() 883 | 884 | @torch.no_grad() 885 | def variable_scale_pre_cfg_patch(args): 886 | conds_out = args["conds_out"] 887 | cond_scale = args["cond_scale"] 888 | sigma = args["sigma"][0].item() 889 | scales = [cond_scale,target_scale] 890 | 891 | if not torch.any(conds_out[1]): 892 | return conds_out 893 | 894 | if proportional_to == "steps progression": 895 | progression = sigma_to_percent(model_sampling, sigma) 896 | else: 897 | progression = 1 - sigma / sigma_max 898 | progression = max(min(progression, 1), 0) 899 | 900 | current_scale = scales[target_as_start] * (1 - progression) + scales[not target_as_start] * progression 901 | new_scale = (current_scale - 1) / (cond_scale - 1) 902 | conds_out[1] = weighted_average(conds_out[1], conds_out[0], new_scale) 903 | 904 | return conds_out 905 | 906 | m = model.clone() 907 | m.set_model_sampler_pre_cfg_function(variable_scale_pre_cfg_patch) 908 | return (m, ) 909 | 910 | class latent_noise_subtract_mean_node: 911 | def __init__(self): 912 | pass 913 | @classmethod 914 | def INPUT_TYPES(s): 915 | return {"required": { 916 | "latent_input": ("LATENT", {"forceInput": True}), 917 | "enabled" : ("BOOLEAN", {"default": True}), 918 | }} 919 | FUNCTION = "exec" 920 | RETURN_TYPES = ("LATENT",) 921 | CATEGORY = "latent" 922 | 923 | def exec(self, latent_input, enabled): 924 | if not enabled: 925 | return (latent_input,) 926 | new_latents = deepcopy(latent_input) 927 | for x in range(len(new_latents['samples'])): 928 | new_latents['samples'][x] -= torch.mean(new_latents['samples'][x]) 929 | return (new_latents,) 930 | 931 | class flip_flip_conds_pre_cfg_node: 932 | @classmethod 933 | def INPUT_TYPES(s): 934 | 
return {"required": {"model": ("MODEL",), 935 | "enabled" : ("BOOLEAN", {"default": True}) 936 | }} 937 | RETURN_TYPES = ("MODEL",) 938 | FUNCTION = "patch" 939 | 940 | CATEGORY = "model_patches/Pre CFG" 941 | 942 | def patch(self, model, enabled): 943 | @torch.no_grad() 944 | def pre_cfg_patch(args): 945 | conds_out = args["conds_out"] 946 | uncond = torch.any(conds_out[1]) 947 | 948 | if not uncond or not enabled: 949 | return conds_out 950 | 951 | conds_out[0], conds_out[1] = conds_out[1], conds_out[0] 952 | return conds_out 953 | 954 | m = model.clone() 955 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 956 | return (m, ) 957 | 958 | class norm_uncond_to_cond_pre_cfg_node: 959 | @classmethod 960 | def INPUT_TYPES(s): 961 | return {"required": {"model": ("MODEL",), 962 | "enabled" : ("BOOLEAN", {"default": True}) 963 | }} 964 | RETURN_TYPES = ("MODEL",) 965 | FUNCTION = "patch" 966 | 967 | CATEGORY = "model_patches/Pre CFG" 968 | 969 | def patch(self, model, enabled): 970 | @torch.no_grad() 971 | def pre_cfg_patch(args): 972 | conds_out = args["conds_out"] 973 | uncond = torch.any(conds_out[1]) 974 | 975 | if not uncond or not enabled: 976 | return conds_out 977 | 978 | conds_out[1] = conds_out[1] / conds_out[1].norm() * conds_out[0].norm() 979 | return conds_out 980 | 981 | m = model.clone() 982 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 983 | return (m, ) 984 | 985 | class replace_uncond_channel_pre_cfg_node: 986 | @classmethod 987 | def INPUT_TYPES(s): 988 | return {"required": {"model": ("MODEL",), 989 | "channel": ("INT", {"default": 1, "min": 1, "max": 128, "step": 1}), 990 | "enabled" : ("BOOLEAN", {"default": True}) 991 | }} 992 | RETURN_TYPES = ("MODEL",) 993 | FUNCTION = "patch" 994 | 995 | CATEGORY = "model_patches/Pre CFG" 996 | 997 | def patch(self, model, channel, enabled): 998 | @torch.no_grad() 999 | def pre_cfg_patch(args): 1000 | conds_out = args["conds_out"] 1001 | uncond = torch.any(conds_out[1]) 1002 | 1003 | if not uncond or not enabled: 1004 | return conds_out 1005 | 1006 | for b in range(len(conds_out[0])): 1007 | if len(conds_out[1][b]) < channel: 1008 | print(F" WRONG CHANNEL SELECTED. 
THE LATENT SPACE ONLY HAS {len(conds_out[1][b])} CHANNELS") 1009 | else: 1010 | conds_out[1][b][channel - 1] = conds_out[0][b][channel - 1] 1011 | 1012 | return conds_out 1013 | 1014 | m = model.clone() 1015 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 1016 | return (m, ) 1017 | 1018 | class merge_uncond_channel_pre_cfg_node: 1019 | @classmethod 1020 | def INPUT_TYPES(s): 1021 | return {"required": {"model": ("MODEL",), 1022 | "channel": ("INT", {"default": 1, "min": 1, "max": 128, "step": 1}), 1023 | "CFG_scale": ("FLOAT", {"default": 5, "min": 2.0, "max": 100.0, "step": 1/2, "round": 1/100}), 1024 | "start_at_sigma": ("FLOAT", {"default": 15.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 1025 | "end_at_sigma": ("FLOAT", {"default": 01.0, "min": 0.0, "max": 100.0, "step": 1/100, "round": 1/100}), 1026 | "enabled" : ("BOOLEAN", {"default": True}) 1027 | }} 1028 | RETURN_TYPES = ("MODEL",) 1029 | FUNCTION = "patch" 1030 | 1031 | CATEGORY = "model_patches/Pre CFG" 1032 | 1033 | def patch(self, model, channel, CFG_scale, start_at_sigma, end_at_sigma, enabled): 1034 | if not enabled: return model, 1035 | @torch.no_grad() 1036 | def pre_cfg_patch(args): 1037 | conds_out = args["conds_out"] 1038 | cond_scale = args["cond_scale"] 1039 | sigma = args["sigma"][0].item() 1040 | 1041 | if not torch.any(conds_out[1]) or sigma <= end_at_sigma or sigma > start_at_sigma: 1042 | return conds_out 1043 | 1044 | for b in range(len(conds_out[0])): 1045 | if len(conds_out[1][b]) < channel: 1046 | print(F" WRONG CHANNEL SELECTED. THE LATENT SPACE ONLY HAS {len(conds_out[1][b])} CHANNELS") 1047 | else: 1048 | new_scale = (CFG_scale - 1) / (cond_scale - 1) 1049 | conds_out[1][b][channel - 1] = weighted_average(conds_out[1][b][channel - 1], conds_out[0][b][channel - 1], new_scale) 1050 | return conds_out 1051 | 1052 | m = model.clone() 1053 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 1054 | return (m, ) 1055 | 1056 | class multiply_cond_pre_cfg_node: 1057 | @classmethod 1058 | def INPUT_TYPES(s): 1059 | return {"required": {"model": ("MODEL",), 1060 | "selection": (["both","cond","uncond"],), 1061 | "value": ("FLOAT", {"default": 0, "min": -100.0, "max": 100.0, "step": 1/100, "round": 1/100}), 1062 | "enabled" : ("BOOLEAN", {"default": True}) 1063 | }} 1064 | RETURN_TYPES = ("MODEL",) 1065 | FUNCTION = "patch" 1066 | 1067 | CATEGORY = "model_patches/Pre CFG" 1068 | 1069 | def patch(self, model, selection, value, enabled): 1070 | @torch.no_grad() 1071 | def pre_cfg_patch(args): 1072 | conds_out = args["conds_out"] 1073 | uncond = torch.any(conds_out[1]) 1074 | 1075 | if (not uncond and selection in ["both","uncond"]) or not enabled: 1076 | return conds_out 1077 | 1078 | if selection in ["both","cond"]: 1079 | conds_out[0] = conds_out[0] * value 1080 | if selection in ["both","uncond"]: 1081 | conds_out[1] = conds_out[1] * value 1082 | 1083 | return conds_out 1084 | 1085 | m = model.clone() 1086 | m.set_model_sampler_pre_cfg_function(pre_cfg_patch) 1087 | return (m, ) 1088 | 1089 | def generate_gradient_mask(tensor, horizontal=False): 1090 | dim = 3 if horizontal else 2 1091 | gradient = torch.linspace(0, 1, steps=tensor.size(dim), device=tensor.device) 1092 | if horizontal: 1093 | merging_gradient = gradient.repeat(tensor.size(0), tensor.size(1), tensor.size(2), 1) 1094 | else: 1095 | merging_gradient = gradient.unsqueeze(1).repeat(tensor.size(0), tensor.size(1), 1, tensor.size(3)) 1096 | return merging_gradient 1097 | 1098 | class gradient_scaling_pre_cfg_node: 1099 | @classmethod 
1100 | def INPUT_TYPES(s): 1101 | return {"required": {"model": ("MODEL",), 1102 | "maximum_scale": ("FLOAT", {"default": 80, "min": 0.0, "max": 1000.0, "step": 1, "round": 1/100, "tooltip":"It is an equivalent to the CFG scale."}), 1103 | "minimum_scale": ("FLOAT", {"default": 4.5, "min": 0.0, "max": 10.0, "step": 1/2, "round": 1/100, "tooltip":"It is an equivalent to the CFG scale."}), 1104 | "strength": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 10.0, "step": 1/10, "round": 1/10}), 1105 | "end_at_sigma": ("FLOAT", {"default": 0.28, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 1106 | # "free_scale" : ("BOOLEAN", {"default": False}), 1107 | "converging_scales" : ("BOOLEAN", {"default": True}), 1108 | # "noise_add_diff" : ("BOOLEAN", {"default": True}), 1109 | # "split_channels" : ("BOOLEAN", {"default": False}), 1110 | "invert_mask" : ("BOOLEAN", {"default": False}), 1111 | # "no_input" : (["rand","rev","cond","uncond","swap","r_swap","diff","add_diff","rand_rev","rev_cond","rand_cond","rev_cond_sp","cond_rev_sp"],), 1112 | # "start_at_sigma": ("FLOAT", {"default": 15, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 1113 | # "end_at_sigma": ("FLOAT", {"default": 0.28, "min": 0.0, "max": 1000.0, "step": 1/100, "round": 1/100}), 1114 | }, 1115 | "optional":{ 1116 | "input_mask": ("MASK", {"tooltip":"If only a mask is connected the scale becomes a CFG scale of what is being masked.\nWhen a latent is connected the mask defines what will be modified by the node."},), 1117 | "input_latent": ("LATENT", {"tooltip":"If a latent is connected the scale becomes the maximum scale allowed in which to seek similarity."},), 1118 | } 1119 | } 1120 | RETURN_TYPES = ("MODEL",) 1121 | FUNCTION = "patch" 1122 | 1123 | CATEGORY = "model_patches/Pre CFG" 1124 | 1125 | def get_latent_guidance_mask_channel(self,x_orig,cond,uncond,guide,minimum_scale,maximum_scale,noise_add_diff): 1126 | scales = torch.zeros_like(x_orig, device=x_orig.device) 1127 | for b in range(cond.shape[0]): 1128 | for c in range(cond.shape[1]): 1129 | scales[b][c] = self.get_latent_guidance_mask(x_orig[b][c],cond[b][c],uncond[b][c],guide[0][c],minimum_scale,maximum_scale,noise_add_diff) 1130 | return scales 1131 | 1132 | @torch.no_grad() 1133 | def get_latent_guidance_mask(self,x_orig,cond,uncond,guide,minimum_scale,maximum_scale,noise_add_diff): 1134 | low_denoised = get_denoised_at_scale(x_orig,cond,uncond,minimum_scale) 1135 | high_denoised = get_denoised_at_scale(x_orig,cond,uncond,maximum_scale) 1136 | if noise_add_diff: 1137 | guide = guide + (guide - (x_orig * guide.norm() / x_orig.norm())) 1138 | guide = guide / guide.norm() 1139 | low_diff = (low_denoised - guide * low_denoised.norm()).abs() 1140 | high_diff = (high_denoised - guide * high_denoised.norm()).abs() 1141 | return torch.clamp(low_diff / high_diff, min=0, max=1) 1142 | 1143 | def patch(self, model, maximum_scale, minimum_scale, invert_mask, strength, end_at_sigma, start_at_sigma=99999, no_input="swap", noise_add_diff=True, converging_scales=False, split_channels=False, free_scale=False, input_mask=None, input_latent=None): 1144 | sigma_min, sigma_max = get_sigma_min_max(model) 1145 | model_sampling = model.model.model_sampling 1146 | scaling_function = self.get_latent_guidance_mask_channel if split_channels else self.get_latent_guidance_mask 1147 | mask_as_weight = None 1148 | latent_as_guidance = None 1149 | random_guidance = False 1150 | if input_mask is not None: 1151 | mask_as_weight = input_mask.clone().to(device=default_device) 1152 | 
    def patch(self, model, maximum_scale, minimum_scale, invert_mask, strength, end_at_sigma, start_at_sigma=99999, no_input="swap", noise_add_diff=True, converging_scales=False, split_channels=False, free_scale=False, input_mask=None, input_latent=None):
        sigma_min, sigma_max = get_sigma_min_max(model)
        model_sampling   = model.model.model_sampling
        scaling_function = self.get_latent_guidance_mask_channel if split_channels else self.get_latent_guidance_mask
        mask_as_weight     = None
        latent_as_guidance = None
        random_guidance    = False
        if input_mask is not None:
            mask_as_weight = input_mask.clone().to(device=default_device)
            if invert_mask:
                mask_as_weight = 1 - mask_as_weight
            if mask_as_weight.dim() == 3:
                mask_as_weight = mask_as_weight.unsqueeze(1)
        if input_latent is not None:
            latent_as_guidance = input_latent["samples"].clone().to(device=default_device)
        elif input_mask is None:
            random_guidance = True

        snc = lambda x: x / x.norm()
        trl = lambda x: torch.randn_like(x, device=x.device)
        # Each operation takes (x_orig, cond, uncond, sigma_percent, sigma / sigma_max)
        # and returns a synthetic guide rescaled to the norm of x_orig.
        no_input_operations = {
            "rand":        lambda x, y, o, z, s: snc(trl(x)) * x.norm(),
            "rev":         lambda x, y, o, z, s: x * -1,
            "cond":        lambda x, y, o, z, s: snc(y) * x.norm(),
            "uncond":      lambda x, y, o, z, s: snc(o) * x.norm() * -1,
            "swap":        lambda x, y, o, z, s: no_input_operations["cond"](x, y, o, z, s) if s > 0.36 else no_input_operations["uncond"](x, y, o, z, s),
            "r_swap":      lambda x, y, o, z, s: no_input_operations["cond"](x, y, o, z, s) if s <= 0.36 else no_input_operations["uncond"](x, y, o, z, s),
            "diff":        lambda x, y, o, z, s: snc(y - o) * x.norm(),
            "add_diff":    lambda x, y, o, z, s: snc(y + y - o) * x.norm(),
            "rand_rev":    lambda x, y, o, z, s: snc(trl(x)) * x.norm(),  # currently identical to "rand"
            "rev_cond":    lambda x, y, o, z, s: (snc(x) * -1 + snc(y) * 0.5) * x.norm() / 1.5,
            "rand_cond":   lambda x, y, o, z, s: (snc(x) * -1 + snc(trl(x)) * 0.5) * x.norm() / 1.5,
            "rev_cond_sp": lambda x, y, o, z, s: no_input_operations["rev"](x, y, o, z, s) * z + (1 - z) * no_input_operations["cond"](x, y, o, z, s),
            "cond_rev_sp": lambda x, y, o, z, s: no_input_operations["rev"](x, y, o, z, s) * (1 - z) + z * no_input_operations["cond"](x, y, o, z, s),
        }
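
        # Reading the table above: "swap", the default, guides toward the positive
        # prediction while sigma is still above 36% of sigma_max, then toward the
        # negated negative prediction for the rest of the sampling; "r_swap" does
        # the opposite. The "_sp" variants blend "rev" and "cond" using the squared
        # sigma percentage computed in pre_cfg_patch below.
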
        @torch.no_grad()
        def pre_cfg_patch(args):
            nonlocal mask_as_weight, latent_as_guidance
            conds_out  = args["conds_out"]
            cond_scale = args["cond_scale"]
            x_orig     = args['input']
            sigma      = args["sigma"][0]
            sp = min(1, max(0, sigma_to_percent(model_sampling, sigma - sigma_min * 3) + 1 / 100)) ** 2

            if not torch.any(conds_out[1]) or sigma <= end_at_sigma or sigma > start_at_sigma or (converging_scales and sp == 1):
                return conds_out

            if converging_scales:
                # Both bounds converge toward the sampler's own scale as sampling ends.
                current_maximum_scale = sp * cond_scale + (1 - sp) * maximum_scale
                current_minimum_scale = sp * cond_scale + (1 - sp) * minimum_scale
            else:
                current_maximum_scale = maximum_scale
                current_minimum_scale = minimum_scale

            if mask_as_weight is not None and mask_as_weight.shape[-2:] != conds_out[1].shape[-2:]:
                mask_as_weight = F.interpolate(mask_as_weight, size=(conds_out[1].shape[-2], conds_out[1].shape[-1]), mode='bilinear', align_corners=False)

            if random_guidance:
                latent_as_guidance = no_input_operations[no_input](x_orig.clone(), conds_out[0].clone(), conds_out[1].clone(), sp, sigma / sigma_max)

            if latent_as_guidance is not None:
                if latent_as_guidance.shape[-2:] != conds_out[1].shape[-2:]:
                    latent_as_guidance = F.interpolate(latent_as_guidance, size=(conds_out[1].shape[-2], conds_out[1].shape[-1]), mode='bilinear', align_corners=False)

                scaling_weight = scaling_function(x_orig, conds_out[0], conds_out[1], latent_as_guidance.clone(), current_minimum_scale, current_maximum_scale, noise_add_diff)

                target_scales = scaling_weight * current_maximum_scale + (1 - scaling_weight) * current_minimum_scale

                if free_scale:
                    target_scales = target_scales * cond_scale / target_scales.mean()

                global_multiplier = strength
                if mask_as_weight is not None:
                    global_multiplier = global_multiplier * mask_as_weight

                target_scales = target_scales * global_multiplier + torch.full_like(target_scales, cond_scale) * (1 - global_multiplier)
                conds_out[1] = make_new_uncond_at_scale(conds_out[0], conds_out[1], cond_scale, target_scales)
                return conds_out
            else:
                # Mask only: the mask blends directly between maximum_scale and the sampler's scale.
                target_scales = maximum_scale * mask_as_weight * strength + torch.full_like(conds_out[1], cond_scale) * (1 - mask_as_weight * strength)
                conds_out[1] = make_new_uncond_at_scale(conds_out[0], conds_out[1], cond_scale, target_scales)
                return conds_out

        m = model.clone()
        m.set_model_sampler_pre_cfg_function(pre_cfg_patch)
        return (m, )

class EmptyRGBImage:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "width": ("INT", {"default": 1024, "min": 1, "max": 16384, "step": 1}),
                              "height": ("INT", {"default": 1024, "min": 1, "max": 16384, "step": 1}),
                              "batch_size": ("INT", {"default": 1, "min": 1, "max": 4096}),
                              "r": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              "g": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              "b": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              },
                "optional": {
                              "grayscale_to_color": ("IMAGE",),
                             }}
    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "generate"
    CATEGORY = "image"

    def generate(self, width, height, batch_size=1, r=0, g=0, b=0, grayscale_to_color=None):
        if grayscale_to_color is not None:
            # Use the mean of the input's channels as a luminance mask and adopt its dimensions.
            grayscale_to_color = grayscale_to_color.permute(0, 3, 1, 2).mean(dim=1).unsqueeze(-1)
            height = grayscale_to_color.shape[1]
            width  = grayscale_to_color.shape[2]
        r_normalized = torch.full([batch_size, height, width, 1], r / 255.0)
        g_normalized = torch.full([batch_size, height, width, 1], g / 255.0)
        b_normalized = torch.full([batch_size, height, width, 1], b / 255.0)
        rgb_image = torch.cat((r_normalized, g_normalized, b_normalized), dim=-1)
        if grayscale_to_color is not None:
            rgb_image = rgb_image * grayscale_to_color
        return (rgb_image,)

gradient_patterns = {
    "linear": lambda x, y: x,
    "sine": lambda x, y: torch.sin(x * torch.pi * y),
    "triangle": lambda x, y: 2 * torch.abs(torch.round(x % (1 / max(y, 1)) * y) - (x % (1 / max(y, 1)) * y)),
}
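
# A rough feel for the patterns on a 0..1 ramp (values rounded):
#
#   x = torch.linspace(0, 1, 5)         # tensor([0.00, 0.25, 0.50, 0.75, 1.00])
#   gradient_patterns["linear"](x, 1)   # unchanged ramp
#   gradient_patterns["sine"](x, 1)     # sin(pi * x): 0.00, 0.71, 1.00, 0.71, 0.00
#
# "sine" with y periods produces y arches across the axis, while "triangle" folds
# the ramp into y linear back-and-forth sweeps.
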
class GradientRGBImage:
    @classmethod
    def INPUT_TYPES(s):
        return {"required": { "width": ("INT", {"default": 1024, "min": 0, "max": 16384, "step": 64}),
                              "height": ("INT", {"default": 1024, "min": 0, "max": 16384, "step": 64}),
                              "r1": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              "g1": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              "b1": ("INT", {"default": 0, "min": 0, "max": 255, "step": 1}),
                              "r2": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
                              "g2": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
                              "b2": ("INT", {"default": 255, "min": 0, "max": 255, "step": 1}),
                              "axis": (["vertical","horizontal","circular"],),
                              "power_to": ("INT", {"default": 1, "min": 1, "max": 16, "step": 1}),
                              "reverse_power": ("BOOLEAN", {"default": False}),
                              },
                "optional":{
                              "mask": ("MASK",),
                            }
                }
    RETURN_TYPES = ("IMAGE","MASK",)
    FUNCTION = "generate"
    CATEGORY = "image"

    def get_gradient_mask(self, width, height, horizontal):
        if horizontal:
            return torch.linspace(0, 1, width).view(1, 1, width).repeat(1, height, 1)
        return torch.linspace(0, 1, height).view(1, height, 1).repeat(1, 1, width)

    def generate(self, width, height, batch_size=1, r1=0, g1=0, b1=0, r2=255, g2=255, b2=255, pattern_value=1, power_to=1, reverse_power=False, axis="vertical", mask=None):
        gradient = self.get_gradient_mask(width, height, axis in ["horizontal","circular"])
        gradient = gradient_patterns["linear" if axis != "circular" else "sine"](gradient, pattern_value)

        if axis == "circular":
            gradient2 = self.get_gradient_mask(width, height, False)
            gradient2 = gradient_patterns["sine"](gradient2, pattern_value)
            gradient  = gradient * gradient2

        # power_to biases the gradient toward one end; reverse_power biases it toward the other.
        if power_to > 1:
            if reverse_power:
                gradient = 1 - gradient
            gradient = gradient ** power_to
            if reverse_power:
                gradient = 1 - gradient

        if mask is not None:
            if mask.shape != gradient.shape:
                mask = F.interpolate(mask.unsqueeze(1), size=(gradient.shape[-2], gradient.shape[-1]), mode='nearest').squeeze(1)
            gradient = gradient * mask

        gradient = gradient.squeeze(0).unsqueeze(-1)

        r_gradient = r1 / 255.0 + gradient * (r2 - r1) / 255.0
        g_gradient = g1 / 255.0 + gradient * (g2 - g1) / 255.0
        b_gradient = b1 / 255.0 + gradient * (b2 - b1) / 255.0

        r_image = r_gradient.expand(batch_size, height, width, 1)
        g_image = g_gradient.expand(batch_size, height, width, 1)
        b_image = b_gradient.expand(batch_size, height, width, 1)
        rgb_image = torch.cat((r_image, g_image, b_image), dim=-1)

        mask_gradient = gradient.expand(1, height, width, 1).squeeze(-1)

        return (rgb_image, mask_gradient,)
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
[project]
name = "pre_cfg_comfy_nodes_for_comfyui"
description = "A set of nodes to prepare the noise predictions before the CFG function"
version = "1.0.0"
license = "LICENSE"

[project.urls]
Repository = "https://github.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI"
# Used by Comfy Registry https://comfyregistry.org

[tool.comfy]
PublisherId = "extraltodeus"
DisplayName = "pre_cfg_comfy_nodes_for_ComfyUI"
Icon = ""
--------------------------------------------------------------------------------
/red_to_blue_gradient_latent_guidance_workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/967b1816462a3f9834887a6295e2daea838d0b62/red_to_blue_gradient_latent_guidance_workflow.png
--------------------------------------------------------------------------------
/spiral.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/967b1816462a3f9834887a6295e2daea838d0b62/spiral.jpg
--------------------------------------------------------------------------------
/workflows/playground_abunchofstuff.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/967b1816462a3f9834887a6295e2daea838d0b62/workflows/playground_abunchofstuff.png
--------------------------------------------------------------------------------
/workflows/playground_new_autocfg.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/967b1816462a3f9834887a6295e2daea838d0b62/workflows/playground_new_autocfg.png
--------------------------------------------------------------------------------
/workflows/playground_precfgpag.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Extraltodeus/pre_cfg_comfy_nodes_for_ComfyUI/967b1816462a3f9834887a6295e2daea838d0b62/workflows/playground_precfgpag.png
--------------------------------------------------------------------------------