
cgem156-comfyui's Introduction

cgem156-ComfyUI🍌

A collection of custom nodes for ComfyUI.

All nodes appear under the cgem156🍌 category.

Usage instructions are available at each feature's link (probably).

Some features assume that the ShowText node from ComfyUI-Custom-Scripts is installed.

Features

| Feature | Description | Reference |
| --- | --- | --- |
| attention_couple | Node for assigning prompts to image regions. | https://note.com/gcem156/n/nb3d516e376d7 |
| batch_condition | Nodes for handling strings in batches. | |
| cd_tuner | Implements only part of cd_tuner's functionality. | https://github.com/hako-mikan/sd-webui-cd-tuner |
| custom_samplers | Original samplers. | See the linked page. |
| custom_schedulers | Original schedulers. | |
| dart | Nodes for using Dart. | https://huggingface.co/p1atdev/dart-v1-sft |
| lora_merger | Node for merging LoRAs. | |
| multiple_lora_loader | Node for applying multiple LoRAs at once. | |
| lortnoc | A mysterious ControlNet. | https://note.com/gcem156/n/n82067cbdeda3 |
| scale_crafter | Partial implementation of ScaleCrafter. | https://github.com/YingqingHe/ScaleCrafter |
| aesthetic_shadow | Quality-assessment model for anime-style images? | https://huggingface.co/shadowlilac/aesthetic-shadow-v2 |
| wd-tagger | WD-Tagger v3. | https://huggingface.co/SmilingWolf, https://github.com/neggles/wdv3-timm |

References

The following repositories and articles were used as references for how to write the nodes:

https://github.com/pythongosssss/ComfyUI-Custom-Scripts

https://note.com/nyaoki_board/n/na7c54c9ae2a5


cgem156-comfyui's Issues

Error while trying to merge two LoRAs: tensor size mismatch

The error I received:

Error occurred when executing LoraMerger|cgem156:

The size of tensor a (64) must match the size of tensor b (32) at non-singleton dimension 1

File "C:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\cgem156-ComfyUI\scripts\lora_merger\merge.py", line 46, in lora_merge
lora = self.merge(lora_1, lora_2, mode, rank, threshold, device, dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\name\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI\custom_nodes\cgem156-ComfyUI\scripts\lora_merger\merge.py", line 98, in merge
up = up_1 + up_2

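The two LoRAs being merged have different ranks (64 vs 32), so their up/down weight matrices cannot be added elementwise. A common way to combine LoRAs of unequal rank, sketched below under the assumption that the weights are plain `(out, rank)` / `(rank, in)` matrices (the node's actual merge code may differ), is to concatenate along the rank axis, which exactly preserves the sum of the two weight deltas:

```python
import torch

def merge_lora_concat(up_1, down_1, up_2, down_2):
    """Hypothetical sketch: merge two LoRA layers of different ranks by
    concatenating along the rank dimension instead of adding, so a
    rank-64 and a rank-32 LoRA combine into one rank-96 LoRA."""
    up = torch.cat([up_1, up_2], dim=1)        # (out, r1 + r2)
    down = torch.cat([down_1, down_2], dim=0)  # (r1 + r2, in)
    return up, down

# A rank-64 and a rank-32 LoRA for a 320x320 linear layer
up_a, down_a = torch.randn(320, 64), torch.randn(64, 320)
up_b, down_b = torch.randn(320, 32), torch.randn(32, 320)
up, down = merge_lora_concat(up_a, down_a, up_b, down_b)
# the merged delta equals the sum of the two individual deltas
assert torch.allclose(up @ down, up_a @ down_a + up_b @ down_b, atol=1e-3)
```

The trade-off is that the merged rank grows; a rank-reduction step (e.g. SVD back down to a target rank) is usually applied afterwards if a fixed rank is required.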

donation?

Hey @laksjdjf, do you have a PayPal account? I'd like to send you something for all your hard work, especially the initial code for the IPAdapter. Please contact me if you want; you'll find my contact info in my profile too.

Cannot Import into Custom Nodes

I get the following error when trying to import the cgem156-ComfyUI custom nodes, whether installed via ComfyUI Manager or git clone.

This is the error:

Traceback (most recent call last):
  File "/Users/yelnady/Data/ComfyUI/nodes.py", line 1879, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI/__init__.py", line 35, in <module>
    module = importlib.import_module(f"custom_nodes.cgem156-ComfyUI.scripts.{script}")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1128, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI/__init__.py", line 35, in <module>
    module = importlib.import_module(f"custom_nodes.cgem156-ComfyUI.scripts.{script}")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI/scripts/dart/__init__.py", line 1, in <module>
    from .node import LoadDart, DartPrompt, DartConfig, DartGenerate, BanTags
  File "/Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI/scripts/dart/node.py", line 2, in <module>
    from transformers.generation.logits_process import UnbatchedClassifierFreeGuidanceLogitsProcessor
ImportError: cannot import name 'UnbatchedClassifierFreeGuidanceLogitsProcessor' from 'transformers.generation.logits_process' (/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/transformers/generation/logits_process.py)

Cannot import /Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI module for custom nodes: cannot import name 'UnbatchedClassifierFreeGuidanceLogitsProcessor' from 'transformers.generation.logits_process' (/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/transformers/generation/logits_process.py)

Thank you for the help!
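`UnbatchedClassifierFreeGuidanceLogitsProcessor` only exists in newer transformers releases (it appears to have been added around version 4.32), so this ImportError usually means the installed transformers is too old for the dart node. A minimal sketch of a compatibility check, assuming the class ships with recent transformers:

```python
def has_cfg_processor() -> bool:
    """Return True if the installed transformers provides the logits
    processor the dart node imports; False if it is missing or if
    transformers itself is not installed."""
    try:
        from transformers.generation.logits_process import (
            UnbatchedClassifierFreeGuidanceLogitsProcessor,  # noqa: F401
        )
        return True
    except ImportError:  # ModuleNotFoundError is a subclass, so this covers both
        return False

if not has_cfg_processor():
    print("upgrade with: pip install -U 'transformers>=4.32'")
```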

Sizes of tensors must match error

When I have a simple background mask prompt and a complex foreground mask prompt, merge them, and run them through Load Attention Couple into my KSampler, I get an error (see below).

My Background prompt:

glamorous photo of Ronda, Spain on a beautiful day in the style of Annie Leibovitz, 8K, medium format, perfect photograph

My Foreground prompt:

full body shot, fashion photograph of Britney Spears wearing (jeans :1.5), (suit vest :1.1), standing in a relaxed pose, (pillbox hat :1.2), long pants, beautiful eyes, (perfect legs:1.5), six-pack abs, white teeth, perfect skin, detailed skin, sweaty skin, sexy, best composition, in the style of Annie Leibovitz, high fashion, luxurious, extravagant, stylish, sensual, sharp focus, 4K, opulent, elegance, stunning beauty, professional, high contrast, highly detailed, fashion photography

My Negative prompt:

bikini, mini-skirt, out of frame, no head, cropped person, bad composition, monochrome, bad quality, deformed, bad anatomy, ugly, extra limbs, extra fingers, (interlocked fingers), twins, 2girls, text, technicolor hair

If I cut out the Load Attention Couple node, everything works as expected (if you were expecting an unattractive result). Here's the relevant section of my workflow (there's a huge chunk that generates a random setting, photographer, etc. that I haven't included, and I've trimmed out the upscale section as well):

Attention error

ERROR:root:Traceback (most recent call last):
  File "E:\SD\Packages\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\SD\Packages\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\SD\Packages\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\SD\Packages\ComfyUI\nodes.py", line 1368, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "E:\SD\Packages\ComfyUI\nodes.py", line 1338, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "E:\SD\Packages\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 700, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 605, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 544, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 282, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 272, in forward
    return self.apply_model(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 269, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 249, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "E:\SD\Packages\ComfyUI\comfy\samplers.py", line 223, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "E:\SD\Packages\ComfyUI\comfy\model_base.py", line 95, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 849, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 43, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\attention.py", line 632, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\attention.py", line 459, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
    return func(*inputs)
  File "E:\SD\Packages\ComfyUI\comfy\ldm\modules\attention.py", line 556, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "E:\SD\Packages\ComfyUI\custom_nodes\attention-couple-ComfyUI\attention_couple.py", line 120, in patch
    context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0)
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 1 in the list.
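The 77-vs-154 mismatch comes from CLIP's token limit: prompts are encoded in 77-token chunks, so the long foreground prompt yields a two-chunk (154-token) conditioning tensor while the short background prompt yields one chunk, and `torch.cat(..., dim=0)` refuses to stack them. A hedged sketch of one workaround (not the node's actual fix): repeat the shorter conditioning along the token axis so all conds have the same length before concatenation — this assumes all lengths are multiples of the shortest, which holds for CLIP's 77-token chunking.

```python
import torch

def pad_conds_to_same_length(conds):
    """Repeat each shorter conditioning tensor (batch, tokens, dim)
    along the token axis until every tensor matches the longest one.
    Assumes the max length is a multiple of each shorter length,
    which is the case for 77-token CLIP chunks (77, 154, 231, ...)."""
    max_len = max(c.shape[1] for c in conds)
    padded = []
    for c in conds:
        if c.shape[1] < max_len:
            c = c.repeat(1, max_len // c.shape[1], 1)
        padded.append(c)
    return padded

short = torch.randn(1, 77, 768)   # background prompt: one CLIP chunk
long_ = torch.randn(1, 154, 768)  # foreground prompt: two CLIP chunks
batch = torch.cat(pad_conds_to_same_length([short, long_]), dim=0)
assert batch.shape == (2, 154, 768)
```

In practice the same effect is often achieved from the UI side by simply lengthening the short prompt so both encode to the same number of chunks.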

What does attention_scale.py do, and how does it work?

Hi! Thanks for Attention Couple, it's really useful!

Regarding Attention Scale: is it supposed to work together with Attention Couple when doing a HiRes Fix pass?

If so, what do the values mean, and how is it wired?

KSampler error with the `--gpu-only` flag

Hello,

Attention Couple is working perfectly, but when I run ComfyUI with python3 main.py --gpu-only, I get the following error on the KSampler node:

Error occurred when executing KSampler:

k and v must be the same.

File "/Users/yelnady/Data/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 761, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/k_diffusion/sampling.py", line 542, in sample_dpmpp_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 272, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 616, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 619, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 258, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/model_base.py", line 97, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 852, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/ldm/modules/attention.py", line 644, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/.pyenv/versions/3.11.3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/comfy/ldm/modules/attention.py", line 555, in forward
n, context_attn2, value_attn2 = p(n, context_attn2, value_attn2, extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/yelnady/Data/ComfyUI/custom_nodes/cgem156-ComfyUI/scripts/attention_couple/node.py", line 64, in attn2_patch
assert k.mean() == v.mean(), "k and v must be the same."

Here's my workflow:
(workflow screenshot attached)

Update: The error comes from assert k.mean() == v.mean(), because both values become NaN. I tried to debug the attention couple node, but in vain.
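This is consistent with the NaN observation: NaN never compares equal to anything, including itself, so `k.mean() == v.mean()` fails as soon as fp16 overflow under `--gpu-only` produces NaNs, even when k and v are actually identical. A sketch of a check that separates the two failure modes (a hypothetical helper, not the node's code):

```python
import torch

def assert_kv_match(k, v):
    """Distinguish 'tensors contain NaN' from 'tensors genuinely differ'.
    The original mean-comparison reports both as the same error, since
    NaN == NaN is always False."""
    if torch.isnan(k).any() or torch.isnan(v).any():
        raise ValueError("k or v contains NaN - likely fp16 overflow under --gpu-only")
    assert torch.equal(k, v), "k and v must be the same."

k = torch.randn(2, 77, 640)
assert_kv_match(k, k.clone())  # identical, NaN-free tensors pass
```

If NaNs are confirmed, forcing fp32 for the attention computation (or running without `--gpu-only`) is the usual way to rule out half-precision overflow.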

Load LoRA Weight Only node errors when an SDXL lbw is specified

Specifying an SDXL lbw with the Load LoRA Weight Only node raises an error.

LBW17TO26 = [2, 5, 8, 11, 12, 13, 15, 16, 17]
LBW12TO20 = [2, 3, 4, 5, 8, 18, 19, 20]

I believe the indices in the arrays that convert SDXL lbw blocks to standard lbw blocks are each off by one. They should be:

LBW17TO26 = [1, 4, 7, 10, 11, 12, 14, 15, 16]
LBW12TO20 = [1, 2, 3, 4, 7, 17, 18, 19]
