
jags111 / efficiency-nodes-comfyui


This project is forked from lucianocirino/efficiency-nodes-comfyui.


A collection of ComfyUI custom nodes. An awesome, smart way to work with nodes!

Home Page: https://civitai.com/models/32342

License: GNU General Public License v3.0

Languages: JavaScript 23.62%, Python 75.99%, CSS 0.39%
Topics: comfyui, custom nodes, sdxl

efficiency-nodes-comfyui's Introduction

✨🍬 I plan to keep this branch alive and will try to solve or fix any issues, although responses may be slow since I run many GitHub repos. Before raising an issue, please update ComfyUI to the latest version and ensure all required packages are updated as well. Share your workflow in the issue so we can retest it on our end and update the patch. 🍬

Efficiency Nodes for ComfyUI Version 2.0+

A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count.

Releases

Please check out our wiki for use cases and new developments, including workflows and settings:
Efficiency Nodes Wiki

Nodes:

Efficient Loader & Eff. Loader SDXL
  • Nodes that can load & cache Checkpoint, VAE, & LoRA type models. (cache settings found in config file 'node_settings.json')
  • Able to apply LoRA & Control Net stacks via their lora_stack and cnet_stack inputs.
  • Come with positive and negative prompt text boxes. You can also set the way you want the prompt to be encoded via the token_normalization and weight_interpretation widgets (a sketch of the underlying encode call appears after this list).
  • These nodes also feature a variety of custom menu options as shown below.

    note: "🔍 View model info..." requires ComfyUI-Custom-Scripts to be installed to function.

  • These loaders are used by the XY Plot node for many of its plot type dependencies.
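
    For reference, here is a rough, illustrative sketch of the prompt-encoding step these loaders perform internally. It mirrors the AdvancedCLIPTextEncode call that appears in the tracebacks further down this page; it only runs inside a ComfyUI environment with this node pack installed, and the import path, helper name, and default argument values are assumptions for illustration.

        # Illustrative sketch only -- mirrors the encode_prompts() call in efficiency_nodes.py
        # (visible in the tracebacks below). Requires a running ComfyUI with this node pack;
        # `clip` is a CLIP object returned by a checkpoint loader.
        from py import bnk_adv_encode  # hypothetical import; shipped as py/bnk_adv_encode.py in this repo

        def encode_positive(clip, prompt, token_normalization="none", weight_interpretation="comfy"):
            # Returns a CONDITIONING usable by the KSampler (Efficient) nodes.
            return bnk_adv_encode.AdvancedCLIPTextEncode().encode(
                clip, prompt, token_normalization, weight_interpretation)[0]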

KSampler (Efficient), KSampler Adv. (Efficient), KSampler SDXL (Eff.)
  • Modded KSamplers with the ability to live-preview generations and/or VAE-decode images.
  • Feature a special seed box that allows for clearer management of seeds. (Set the seed to -1 to apply the selected seed behavior.)
  • Can execute a variety of scripts, such as the XY Plot script. To activate the script, simply connect the input connection.

           

Script Nodes
  • A group of nodes used in conjunction with the Efficient KSamplers to execute a variety of 'pre-wired' actions.

  • Script nodes can be chained if their inputs/outputs allow it. Multiple instances of the same Script Node in a chain do nothing.

    XY Plot
    • Node that allows users to specify parameters for the Efficiency KSamplers to plot on a grid.

    HighRes-Fix
    • Node that gives the user the ability to upscale KSampler results through a variety of different methods.
    • Comes out of the box with popular Neural Network Latent Upscalers such as Ttl's ComfyUi_NNLatentUpscale and City96's SD-Latent-Upscaler.
    • Supports ControlNet guided latent upscaling. (You must have Fannovel's comfyui_controlnet_aux installed to unlock this feature)
    • Local models: the node pulls the required files from the Hugging Face Hub by default. If you have a flaky connection or prefer to work completely offline, create a models folder and place the files there and the node will load them locally instead. The path should be: ComfyUI/custom_nodes/efficiency-nodes-comfyui/models. Alternatively, just clone the entire HF repo (git clone https://huggingface.co/city96/SD-Latent-Upscaler) into ComfyUI/custom_nodes/efficiency-nodes-comfyui/models.
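
    If you prefer to pre-download the files from Python instead of using git clone, a minimal sketch along these lines should work. It assumes the huggingface_hub package is installed (pip install huggingface_hub); the target path is the models folder mentioned above, so adjust it to your own ComfyUI install.

        # Sketch: pre-download the SD-Latent-Upscaler files for offline use.
        from huggingface_hub import snapshot_download

        snapshot_download(
            repo_id="city96/SD-Latent-Upscaler",
            local_dir="ComfyUI/custom_nodes/efficiency-nodes-comfyui/models",
        )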

    Noise Control
    • This node gives the user the ability to manipulate noise sources in a variety of ways, such as the sampling's RNG source.
    • The CFG Denoiser noise hijack was developed by smZ; it allows you to get closer to recreating Automatic1111 results.
    • Note: The CFG Denoiser does not work with a variety of conditioning types such as ControlNet & GLIGEN

    • This node also allows you to add noise Seed Variations to your generations.
    • To replicate Automatic1111 images, encode your prompt using "length+mean" token_normalization with "A1111" weight_interpretation, set the Noise Control Script node's rng_source to "gpu", and set cfg_denoiser to true.

    Tiled Upscaler
    • The Tiled Upscaler script attempts to encompass BlenderNeko's ComfyUI_TiledKSampler workflow in a single node.
    • The script supports tiled ControlNet guidance via its options.
    • It is strongly recommended to set preview_method to "vae_decoded_only" when running the script.

    AnimateDiff
    • To unlock the AnimateDiff script, you must have Kosinkadink's ComfyUI-AnimateDiff-Evolved installed.
    • The latent batch_size when running this script becomes your frame count.

Image Overlay
  • Node that allows for flexible image overlaying. Also works with image batches.

SimpleEval Nodes
  • A collection of nodes that allows users to write simple Python expressions for a variety of data types using the simpleeval library.
  • To activate these nodes, you must have the simpleeval library installed in your Python environment:
  • pip install simpleeval

       

Latent Upscale nodes
  • Forked from NN latent upscale, this node provides remarkable neural enhancement of latents, making latent scaling a straightforward task.
  • Both NN Latent Upscale and Latent Upscaler improve latents in remarkable ways. If you face any issue with them, please install the nodes from [SD-Latent-Upscaler](https://github.com/city96/SD-Latent-Upscaler) and [ComfyUI_NNLatentUpscale](https://github.com/Ttl/ComfyUi_NNLatentUpscale).

       

Workflow Examples:

Load the PNG files of the same names from the workflow directory into ComfyUI to get these workflows. The PNG files have the workflow JSON embedded in them and are easy to drag and drop!

  1. HiRes-Fixing

  2. SDXL Refining & Noise Control Script

  3. XY Plot: LoRA model_strength vs clip_strength

  4. Stacking Scripts: XY Plot + Noise Control + HiRes-Fix

  5. Stacking Scripts: HiRes-Fix (with ControlNet)

  6. SVD workflow: Stable Video Diffusion + Kohya Hires* (with latent control)


Dependencies

The Python library simpleeval is required if you wish to use the SimpleEval Nodes. It can be installed with a simple pip command:

pip install simpleeval

simpleeval is a single-file library for easily adding evaluatable expressions to Python projects. Say you want to allow a user to set an alarm volume that could depend on the time of day, the alarm level, how many previous alarms have gone off, and whether music is playing at the time.
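
As a minimal sketch of that alarm-volume scenario (the variable names here are purely illustrative, not anything the nodes define), a user-supplied expression could be evaluated like this:

    from simpleeval import simple_eval

    # Evaluate a user-supplied expression against a restricted set of named values.
    # simpleeval only allows simple expressions, not arbitrary Python.
    expression = "10 if music_playing else 20 + alarm_level * 10 + previous_alarms * 5"
    volume = simple_eval(
        expression,
        names={"music_playing": False, "alarm_level": 3, "previous_alarms": 2},
    )
    print(volume)  # -> 60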

Check the Notes for more information.

Install:

To install, drop the "efficiency-nodes-comfyui" folder into the "...\ComfyUI\ComfyUI\custom_nodes" directory and restart the UI.

Todo

[ ] Add guidance to notebook

Comfy Resources

Efficiency Linked Repos

Guides:

If you create a cool image with our nodes, please show your result and message us on Twitter at @jags111 or @NeuralismAI.

You can join the NEURALISM AI DISCORD or the JAGS AI DISCORD to share your work created with these nodes, exchange experiences and parameters, and see more interesting custom workflows.

Support us on Patreon for more future models and new versions of AI notebooks.

My buymeacoffee.com page and links are here; if you are happy with my work, just buy me a coffee!

coffee for JAGS AI

Thank you for being awesome!

efficiency-nodes-comfyui's People

Contributors

alexopus, dnl13, drjkl, edgargracia, haohaocreates, idrirap, jags111, jaredtherriault, jteijema, karrycharon, larsupb, ltdrdata, lucianocirino, mijago, nidefawl, pgadoury, philhk, rgthree, shinihime, shiny1708, slouffka, spinagon


efficiency-nodes-comfyui's Issues

Change default tiling size to 320

If someone is using one of the tile options in a KSampler vae_decode setting, it is due to memory constraints.

Consider lowering the default tile size from 512 to 320 in lines 454 and 457 of efficiency_nodes.py

        def vae_decode_latent(vae, samples, vae_decode):
            return VAEDecodeTiled().decode(vae,samples,320)[0] if "tiled" in vae_decode else VAEDecode().decode(vae,samples)[0]

        def vae_encode_image(vae, pixels, vae_decode):
            return VAEEncodeTiled().encode(vae,pixels,320)[0] if "tiled" in vae_decode else VAEEncode().encode(vae,pixels)[0]

Can't use sd_xl_refiner_1.0.safetensors in Eff. Loader SDXL as refiner model

I don't know what happened; it used to work in version 8f62e4a. If I use other SDXL models as the refiner model instead of sd_xl_refiner_1.0.safetensors, this error doesn't occur.
Here is my error log:

FETCH DATA from: /root/autodl-tmp/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXLRefinerClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/root/autodl-tmp/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 236, in efficientloaderSDXL
    return super().efficientloader(base_ckpt_name, vae_name, clip_skip, lora_name, lora_model_strength, lora_clip_strength,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 172, in efficientloader
    encode_prompts(positive, negative, token_normalization, weight_interpretation, clip, clip_skip,
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 80, in encode_prompts
    refiner_positive_encoded = bnk_adv_encode.AdvancedCLIPTextEncode().encode(refiner_clip, positive_prompt, token_normalization, weight_interpretation)[0]
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 295, in encode
    embeddings_final, pooled = advanced_encode(clip, text, token_normalization, weight_interpretation, w_max=1.0,
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 254, in advanced_encode
    return advanced_encode_from_tokens(tokenized['l'],
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 183, in advanced_encode_from_tokens
    weighted_emb, pooled_base = encode_func(weighted_tokens)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 257, in <lambda>
    lambda x: encode_token_weights(clip, x, encode_token_weights_l),
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 229, in encode_token_weights
    return encode_func(model.cond_stage_model, token_weight_pairs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_adv_encode.py", line 221, in encode_token_weights_l
    l_out, _ = model.clip_l.encode_token_weights(token_weight_pairs)
               ^^^^^^^^^^^^
  File "/root/miniconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'SDXLRefinerClipModel' object has no attribute 'clip_l'

Here is my workflow, which worked well in version 8f62e4a
8f62e4a3
Here is the same workflow, which doesn't work in the latest version 19b6664
19b6664

Changing the values of inputs in a "Control Net Stacker" node does not trigger the regeneration

Hello everyone,

I've noticed that modifying the values of inputs in a "Control Net Stacker" node does not initiate the generation of a new image. However, modifying the inputs of an "Apply ControlNet (Advanced)" node does trigger a new generation.

The workflow used is very basic:
A "Control Net Stacker" connected to an "Eff. Loader SDXL", linked to a "KSampler SDXL (Eff.)" and finally an "Image preview".

Did I miss something?

ControlNet Stack input increases KSampler Advanced Efficient image generation time proportionally to the number of active controlnets

In the scenario below, I enabled a CR Multi-ControlNet Stack node with 3 controlnets (Canny, Depth, and OpenPose). The stack conditions a KSampler Advanced Efficient node.

Screenshot 2023-10-30 at 11 18 17

Doing so enormously increases the generation time: all things equal, the SDXL base model goes from ~3min30s / image (I'm on a Mac...) to ~8min50s / image.

If I modify the stack to only use 2 controlnets, the generation time goes down to ~6min36s / image.

With 1 controlnet only, the generation time goes down to ~4min51s / image.

Is this behavior normal? I don't recall ControlNet influencing generation time in A1111 or SD Next, even when I used 3 at a time, like in this case.

(I noticed this a while ago. It's not something new. I just didn't report it before because I wasn't certain it's an anomaly)

Cannot find or use the NNLatentUpscale Node

image

I'm trying to load this upscale workflow from Latent Vision, and then I noticed a problem.
upscalecomparison.json

I cannot seem to load this "NNLatentUpscale" node for some reason. I don't think my custom nodes conflict with this one node, right? Please show me what I should do to get it working again.

I have tried removing all my custom nodes, and this problem still appears
image

Is this node no longer supported? NNLatentUpscale is an awesome upscaler tool...

Thanks in advance!

"X and Y input types must be different" prohibits a valid usage

First of all, thank you for continuing to make these nodes available and developing!

I would really like to plot a mix of two Lora at various strengths to see what combination works best, but there is a hard check that the input types are different which prevents this. Could this limitation be lifted?

Not an Issue, but a Thank you!!!

Dude, thanks for forking this, as I love these nodes and was sad to see that Luciano wouldn't be maintaining them. I was about to rip/replace the use of these from my Workflows but then saw in the issue post there that you forked it. Will throw some support your way come payday.

There are no hold, script, or sampler options in ksampler

Why was the hold function removed? Sometimes it is very useful, especially when there are many steps: I can adjust things at an intermediate step to avoid regenerating the previous process, which saves a lot of time. This is a very useful function, especially in very complex workflows.

Is it by design that the Ksampler ignores the target width/height of the clip encoders?

UPDATE: for some bizarre reason it seems to work if I restart everything. I suspect it has to do with the way Comfy actually reads the diagram "backwards", as explained on the R3 page regarding control nodes. If you don't refresh the whole thing, Comfy considers the image 'already rendered' despite a diagnostic dump showing the tuples have received the refreshed values. It's not Efficiency's fault, but it's confusing because we tend to think 'left to right'.


{no longer relevant}

I'm trying to use the SDXL efficient KSampler WITHOUT the efficient SDXL loader (I want my own workflow, but I love your X/Y nodes).
When you change the width/height/target width/target height in the CLIP encoder, I spotted that Comfy stopped refreshing my image; instead it continues to the next nodes.

I then replaced the SDXL efficient KSampler with the built-in one and noticed that, indeed, it would refresh when I changed either one or all of those values.

Given that the difference in image quality is nil, I'm not bothered, but it does change the 'look' of the image, as the way SDXL is trained has to do with cropping, buckets and so on. So I'm wondering if this was done on purpose and if you could kindly explain why :)

Thank you!

Suggestion: Improved handling of batches in XY plots

Hi, I just wanted to start by thanking you for keeping this alive, it means a lot.

The suggestion I'm submitting is about allowing a batch size of multiple latents to be utilized as an individual cell within plots.
This brings several improvements but hardly any drawbacks:

  • It enables the creation of 3d plots of sorts, in which you have an X dimension and a Y dimension of choice and then a Z dimension of the latent batch.
  • It makes XY plots that already use batch size faster by generating images in parallel; From my testing, batch size 5 is around a 1.5x speedup over batch size 1, without taking the slight delay between generated images into account.
  • This can potentially coexist with the already present Seed++ batch, as from what I've seen Comfy generates all images in a batch starting from a single initial seed.

As a practical reference for how I envision the feature, I'd suggest taking a look at the default X/Y/Z plot feature in Automatic1111.
Just an example:

sample

[FEATURE REQUEST] - Efficient Loader supporting positive text_g & text_l as well as negative text_g & text_l

Summary

At the moment the Efficient Loader node only accepts one positive prompt input and one negative prompt input.
However, we know that SDXL allows the splitting of each prompt into a text_g and text_l values, each going to different CLIP nodes for encoding.

I assume that, as of today, the Efficient Loader node simply copies the same positive prompt into the text_g and text_l values, and does the same with the negative prompt. This is exactly what the overwhelming majority of SD users are doing.

However, there might be value in differentiating the inputs for text_g and text_l. See, for example, this conversation (including my reply and others):
https://www.reddit.com/r/comfyui/comments/189jj4e/how_do_i_push_details_in_clip_g_sdxl/

Basic Example

I'm proposing that the Efficient Loader node features 4 prompt inputs instead of two:

  • positive text_g
  • positive text_l
  • negative text_g
  • negative text_l

When the Loader is dealing with an SD1.5 checkpoint, it would simply ignore the secondary text input.

Reference Issues.

No response

'SDXLRefinerClipModel' object has no attribute 'clip_l'

When using an SDXL refiner I get the error:

Error occurred when executing Eff. Loader SDXL:

'SDXLRefinerClipModel' object has no attribute 'clip_l'

I use the example workflow "SDXL Refining & Noise Control Script" found in this repo

Want to increase file hierarchy

image
I hope that the file hierarchy (subfolders) can be shown when loading models, because it is very troublesome to search once there are too many models.

AttributeError: module 'comfy.model_management' has no attribute 'batch_area_memory'

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/home/llm/ComfyUI/execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/llm/ComfyUI/execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/llm/ComfyUI/execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/llm/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 2215, in sample_sdxl
return super().sample(sdxl_tuple, noise_seed, steps, cfg, sampler_name, scheduler,
File "/home/llm/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 700, in sample
samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
File "/home/llm/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 634, in process_latent_image
samples = TSampler().sample(model, tile_seed, tile_size, tile_size, tiling_strategy, tiling_steps, cfg,
File "/home/llm/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_tiled_samplers.py", line 314, in sample
return sample_common(model, 'enable', seed, tile_width, tile_height, tiling_strategy, steps_total, cfg, sampler_name, scheduler, positive, negative, latent_image, steps_total-steps, steps_total, 'disable', denoise=1.0, preview=True)
File "/home/llm/ComfyUI/custom_nodes/efficiency-nodes-comfyui/py/bnk_tiled_samplers.py", line 126, in sample_common
comfy.model_management.load_models_gpu([model] + modelPatches, comfy.model_management.batch_area_memory(noise.shape[0] * noise.shape[2] * noise.shape[3]) + inference_memory)
AttributeError: module 'comfy.model_management' has no attribute 'batch_area_memory'

after update all, my workflow show error.

[not an issue] Tiled Upscale Script not working! & sdxl controlnet ?

For me, the Tiled Upscale Script was not working; I got it fixed by reimplementing https://github.com/BlenderNeko/ComfyUI_TiledKSampler.

But ControlNet for SDXL is still not working; I'm searching for a fix. SD1.5 works fine.

Also, I have seen issue #6, which seems to be related to a missing new implementation of https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb.

I am still trying to fix the Tiled Upscale Script's SDXL ControlNet model selection on my fork. Just wanted to let you know that I will send a pull request if I get it to work. Also, feel free to help.

tl;dr
SDXL refiner issue #6 fixed
Tiled Upscale Script works again, but only with SD 1.5 ControlNet for now

Edited: by the way, I don't know if there is a ControlNet tile SDXL model; maybe there is nothing to fix?

Changes to aesthetic scores don't trigger new generation with KSampler Advanced until ComfyUI restart

I'm not 100% sure this is about the KSampler Advanced node, but I'll flag it anyway.

In the following situation, with a fixed seed, a change in the aesthetic score values entering the CLIPTextEncodeSDXLRefiner node won't push the KSampler Advanced node to generate a new refined image. It will simply regenerate the same image, as if no parameter was changed.

Screenshot 2023-10-26 at 10 23 55

To force the generation of an image that takes into account the modified aesthetic score values (all other things equal), I have to interrupt ComfyUI and restart it. As if something gets cached when it shouldn't*.

*I have a similar problem when I change the VAE in the Efficient Loader node: if I change the VAE setting from the baked VAE to a specific VAE file, that is taken into account during the new generation. However, if I revert back to the baked VAE for the subsequent generation, ComfyUI still tells me that it's using the specific VAE file. I have to interrupt ComfyUI and restart it to see the correct VAE setting being picked up.

It feels like something is aggressively cached when it shouldn't.


I recommend reading the follow-up comments of other users in the original Issue as, apparently, this problem manifests itself in many different ways.

Request: Multiple prompt plots

I would like to be able to connect multiple XY Input: Prompt S/R nodes like this

chrome_j9TjdnxsBv

and get (in this case) 4 images

armor knight
leather mage
armor knight
leather mage

[BUG] Latest update broke the way the custom node handles the Seed

Hey,
Not sure whether it's due to a ComfyUI update or this custom node, but after updating everything, the behavior of the Efficient Sampler is weird. It switches between random and fixed seed on its own, and what it shows visually doesn't correspond to what is programmed to happen. For example, if I see a big number, I assume the seed is fixed, but if I generate something, it's suddenly actually doing a random generation. Another related issue: if I put -1 in the seed box and then click on any other node, the seed suddenly becomes a fixed one.

What happened?

Efficiency nodes import failed with ComfyUI

Has this issue been opened before?
I can't say, as I installed the UI for the first time. But there are Reddit posts that also deal with this error; however, the solutions did not work for me.
https://www.reddit.com/r/comfyui/comments/17n7etb/efficiency_nodes_import_failed/
https://www.reddit.com/r/comfyui/comments/14vjuvi/custom_nodes_import_failed/

Describe the bug
I cannot install the efficiency node. The following error occurs when loading:
webui-docker-comfy-1 | Traceback (most recent call last):
webui-docker-comfy-1 | File "/stable-diffusion/nodes.py", line 1698, in load_custom_node
webui-docker-comfy-1 | module_spec.loader.exec_module(module)
webui-docker-comfy-1 | File "", line 883, in exec_module
webui-docker-comfy-1 | File "", line 241, in _call_with_frames_removed
webui-docker-comfy-1 | File "/stable-diffusion/custom_nodes/efficiency-nodes-comfyui/init.py", line 9, in
webui-docker-comfy-1 | from .efficiency_nodes import NODE_CLASS_MAPPINGS
webui-docker-comfy-1 | File "/stable-diffusion/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 46, in
webui-docker-comfy-1 | from .py import smZ_cfg_denoiser
webui-docker-comfy-1 | File "/stable-diffusion/custom_nodes/efficiency-nodes-comfyui/py/smZ_cfg_denoiser.py", line 7, in
webui-docker-comfy-1 | from comfy.samplers import KSampler, KSamplerX0Inpaint, wrap_model
webui-docker-comfy-1 | ImportError: cannot import name 'wrap_model' from 'comfy.samplers' (/stable-diffusion/comfy/samplers.py)
webui-docker-comfy-1 |
webui-docker-comfy-1 | Cannot import /stable-diffusion/custom_nodes/efficiency-nodes-comfyui module for custom nodes: cannot import name 'wrap_model' from 'comfy.samplers' (/stable-diffusion/comfy/samplers.py)
webui-docker-comfy-1 | ### Loading: ComfyUI-Manager (V1.6.4)
webui-docker-comfy-1 | ### ComfyUI Revision: 1376 [7e941f9f] | Released on '2023-08-30'
webui-docker-comfy-1 | ### Loading: ComfyUI-Impact-Pack (V4.38.2)
webui-docker-comfy-1 | ### Loading: ComfyUI-Impact-Pack (Subpack: V0.3.2)
webui-docker-comfy-1 | WAS Node Suite: OpenCV Python FFMPEG support is enabled
webui-docker-comfy-1 | WAS Node Suite Warning: ffmpeg_bin_path is not set in /stable-diffusion/custom_nodes/was-node-suite-comfyui/was_suite_config.json config file. Will attempt to use system ffmpeg binaries if available.
webui-docker-comfy-1 | WAS Node Suite: Finished. Loaded 197 nodes successfully.
webui-docker-comfy-1 |
webui-docker-comfy-1 | "Success is not final, failure is not fatal: It is the courage to continue that counts." - Winston Churchill
webui-docker-comfy-1 |
webui-docker-comfy-1 | ### Loading: ComfyUI-Manager (V1.6.4)
webui-docker-comfy-1 | ### ComfyUI Revision: 1376 [7e941f9f] | Released on '2023-08-30'
webui-docker-comfy-1 | Traceback (most recent call last):
webui-docker-comfy-1 | File "/stable-diffusion/nodes.py", line 1698, in load_custom_node
webui-docker-comfy-1 | module_spec.loader.exec_module(module)
webui-docker-comfy-1 | File "", line 883, in exec_module
webui-docker-comfy-1 | File "", line 241, in _call_with_frames_removed
webui-docker-comfy-1 | File "/data/config/comfy/custom_nodes/efficiency-nodes-comfyui/init.py", line 9, in
webui-docker-comfy-1 | from .efficiency_nodes import NODE_CLASS_MAPPINGS
webui-docker-comfy-1 | File "/data/config/comfy/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 46, in
webui-docker-comfy-1 | from .py import smZ_cfg_denoiser
webui-docker-comfy-1 | File "/stable-diffusion/custom_nodes/efficiency-nodes-comfyui/py/smZ_cfg_denoiser.py", line 7, in
webui-docker-comfy-1 | from comfy.samplers import KSampler, KSamplerX0Inpaint, wrap_model
webui-docker-comfy-1 | ImportError: cannot import name 'wrap_model' from 'comfy.samplers' (/stable-diffusion/comfy/samplers.py)
webui-docker-comfy-1 |
webui-docker-comfy-1 | Cannot import /data/config/comfy/custom_nodes/efficiency-nodes-comfyui module for custom nodes: cannot import name 'wrap_model' from 'comfy.samplers' (/stable-diffusion/comfy/samplers.py)
webui-docker-comfy-1 |
webui-docker-comfy-1 | Import times for custom nodes:
webui-docker-comfy-1 | 0.0 seconds: /stable-diffusion/custom_nodes/ComfyUI_UltimateSDUpscale
webui-docker-comfy-1 | 0.1 seconds (IMPORT FAILED): /stable-diffusion/custom_nodes/efficiency-nodes-comfyui
webui-docker-comfy-1 | 0.1 seconds: /stable-diffusion/custom_nodes/ComfyUI-Manager
webui-docker-comfy-1 | 0.1 seconds: /data/config/comfy/custom_nodes/ComfyUI-Manager
webui-docker-comfy-1 | 0.1 seconds (IMPORT FAILED): /data/config/comfy/custom_nodes/efficiency-nodes-comfyui
webui-docker-comfy-1 | 1.8 seconds: /stable-diffusion/custom_nodes/was-node-suite-comfyui
webui-docker-comfy-1 | 2.0 seconds: /stable-diffusion/custom_nodes/ComfyUI-Impact-Pack
webui-docker-comfy-1 |
webui-docker-comfy-1 | Starting server
webui-docker-comfy-1 |
webui-docker-comfy-1 | To see the GUI go to: http://0.0.0.0:7860/

Which UI = comfy with gpu

Hardware / Software

OS: Windows 10
OS version: Build 19045-3693
WSL version (if applicable): Version 2
Docker Version: 4.25.2
Docker compose version: v2.23.0-desktop.1
Repo version:
RAM: 128 GB
GPU/VRAM: Tesla p40 with 24GB

I don't know if this bug report belongs here. I cannot tell whether the problem is that this is module version 2 and it no longer harmonizes with the Docker build.

So here is a request for help.
Thank you for your efforts.

KSampler disables preview for the whole workflow

Hi,

I think there's a bug in the node somewhere, as it disables the previews for all other KSamplers and nodes after I run it like this:
image

So basically I don't get any previews until I either restart ComfyUI or change the setting in Comfy Manager.
When I "Set node to never" and use the normal KSampler in it's place it works just fine. Running the sampler from the screenshot just disables the previews permanently when doing a simple img2img upscaling.

Batch of images to Image Overlay fails

If I send a batch of images as the overlay_image input of the Image Overlay node I get this error:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 3089, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 281, 3), '|u1')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 3887, in apply_overlay_image
overlay_image = tensor2pil(overlay_image)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 79, in tensor2pil
return Image.fromarray(np.clip(255. * image.cpu().numpy().squeeze(), 0, 255).astype(np.uint8))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Software\AI Generators\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 3092, in fromarray
raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 281, 3), |u1

I've included a file that can be used to reproduce the failure.

Error.json

Efficient Loader Error

The Efficient Loader error occurs with default values; only the model was changed. Now this error always occurs in all new workflows and I cannot use the node. I tried uninstall-reboot-install-reboot-test with ComfyUI Manager a few times, with the same results.

Error occurred when executing Efficient Loader:

list indices must be integers or slices, not str

File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 172, in efficientloader
encode_prompts(positive, negative, token_normalization, weight_interpretation, clip, clip_skip,
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 73, in encode_prompts
positive_encoded = bnk_adv_encode.AdvancedCLIPTextEncode().encode(clip, positive_prompt, token_normalization, weight_interpretation)[0]
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\bnk_adv_encode.py", line 312, in encode
embeddings_final, pooled = advanced_encode(clip, text, token_normalization, weight_interpretation, w_max=1.0,
File "D:\Utilidades\Stable Difussion\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\bnk_adv_encode.py", line 262, in advanced_encode
return advanced_encode_from_tokens(tokenized['l'],

Feature Request: SEED output from samplers

Summary

Currently the KSampler and KSampler Adv. nodes only output values for:

  • MODEL
  • CONDITIONING+
  • CONDITIONING-
  • LATENT
  • VAE
  • IMAGE

However, a common tactic, especially in ComfyUI, is to modify these outputs after the initial sampling before handing off to another KSampler. When the seed widget is converted into an input, the internal seed generator on the KSampler is no longer used. Allowing the seed input to come from the output of the last KSampler node would improve the clarity and readability of the node connections between KSampler nodes.

Basic Example

  1. Connect an Efficient Loader node to any KSampler (Efficient) (simple or advanced) node.
  2. Use either an external seed generator or the internal seed generator on the KSampler node to generate a seed to be used to sample the latent image.
  3. Connect the KSampler node outputs to the desired nodes for further processing
  4. Connect the outputs of those nodes to the inputs of a second KSampler (Efficient) (simple or advanced) node.
  5. Connect SEED output of 1st KSampler to the seed input of the 2nd KSampler node.

After this, image generation may resume to produce a final image.

Reference Issues

No Response

Feature Request

Can you add CLIPTextEncodeSDXL and CLIPTextEncodeSDXLRefiner settings to the Eff. Loader SDXL? The results are really different.
изображение_2023-12-24_005627264

изображение_2023-12-24_005513263
SDXL
Hassaku

CLIP Results Different in Eff Loader Than Other CLIP Nodes

I have some pictures and some more details around my initial investigation here - LucianoCirino#201 (comment)

Right now the way Eff Loader Interprets CLIP text is different from other CLIP Encoder nodes. I'm not sure what's going on but it seems to affect some LORAs especially hard (making dialing in CLIP STR almost impossible), but it affects all images and checkpoints to some degree.

If someone only uses Eff nodes for all CLIP encoding, in theory this might not be a major issue, but anytime nodes are mixed it can create adverse effects. Happy to provide more details if needed or generate some new examples.

I discovered this trying to use the XY Lora Plot scripts, which highlights the issue quite prominently and also makes the results from it unusable (unless you're using an eff loader to always apply your lora clip)

Causes ComfyUI Manager to become unresponsive

Upon installation into the custom node directory and booting up ComfyUI, ComfyUI Manager becomes unresponsive and fails to launch. I have taken this node in and out of the directory to reproduce the result; it has occurred each time I put the node into the custom_nodes dir.

Error "missing 1 required positional argument: 'control_net_name'" (non-systematic) with the 'HighRes-Fix Script' node

Hello everyone,

From time to time, I encounter an error when using the 'HighRes-Fix Script' node. Sometimes it works, sometimes it doesn't, even while strictly using the same workflow (for instance, through the 'load' button or by dragging an image).

Here's the error:

Error occurred when executing HighRes-Fix Script:

TSC_HighRes_Fix.hires_fix_script() missing 1 required positional argument: 'control_net_name'

  File "C:\ArtificialIntelligence\ComfUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\ArtificialIntelligence\ComfUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\ArtificialIntelligence\ComfUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

And here's the associated workflow:

control_net_error

ComfyUI and Efficiency Nodes are up to date.

Thank you in advance for your help! 👍

noise control - GPU breaking

After the most recent update, when I run the Noise Control script I get the following error if cfg_denoiser = true.
It doesn't seem to matter whether I use cpu/gpu/nv.

Screenshot 2023-11-07 at 4 37 26 AM

New LCM-LoRAs seem to have no effect on image generation speed

I've seen videos of people loading the new LCM-LoRAs greatly reducing the image generation time.

However, loading one of them via the Efficiency Loader node seems to have no effect on my system:

Screenshot 2023-11-15 at 15 21 49

The image generation speed remains identical.

It might well be that, despite what I read, these new LoRAs don't support Apple MPS, but I wanted to flag it just in case it's a problem with how the LoRAs are loaded by the Efficient Loader node.

Thank you.

But this might be an issue :) Eff. Loader: "list indices must be integers or slices, not str"

This is in a brand new Comfy install, set up to try to create a step-by-step guide for installing my Workflow Suite AegisFlow. All works well in an older comfy install.

Full console:
`ERROR:root:Traceback (most recent call last):
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui-main\efficiency_nodes.py", line 172, in efficientloader
encode_prompts(positive, negative, token_normalization, weight_interpretation, clip, clip_skip,
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui-main\efficiency_nodes.py", line 73, in encode_prompts
positive_encoded = bnk_adv_encode.AdvancedCLIPTextEncode().encode(clip, positive_prompt, token_normalization, weight_interpretation)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui-main\py\bnk_adv_encode.py", line 312, in encode
embeddings_final, pooled = advanced_encode(clip, text, token_normalization, weight_interpretation, w_max=1.0,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "H:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui-main\py\bnk_adv_encode.py", line 262, in advanced_encode
return advanced_encode_from_tokens(tokenized['l'],
~~~~~~~~~^^^^^
TypeError: list indices must be integers or slices, not str

Prompt executed in 0.03 seconds`

Error when replicating the AnimateDiff & HighRes-Fix scripts workflow

screenshot
Hi, when I was trying the AnimateDiff & HighRes-Fix scripts workflow, this error message popped up during the sampling process:
"local variable orig_maxium_batch_area referenced before assignment". The HighRes-Fix script works fine on a single picture.
Also, my HighRes-Fix script doesn't show ControlNet options; I tried reinstalling but it didn't work. Is there a solution? Thanks!

Include filename on AnimateDiff Script node

AnimateDiff Evolved has a filename input on its model which would be super handy to be able to configure in Efficiency.

It'd probably be worth looking at porting that one over to the script node. As a user, I'd be very pleased to be able to tell it where to output its images rather than letting it hardcode itself into the output/ directory.

i.e.: output/%date%/%date%_%model (typically done through the save node)

Denoising in KSampler Advanced node

I'm working on an img2img scenario, and my usual Superman image is my source image:

Superman

One of the differences between the KSampler (left) and the KSampler Advanced (right) in this node suite is that the latter lacks a way to define the denoise level:

Screenshot 2023-12-01 at 16 48 24

The KSampler Advanced node allows you to activate add_noise, but I see no way of controlling the amount of denoising.

Given that it's an "advanced" node, I assumed that denoising depends on the sophisticated Noise Control Script, but I tried it and it seems to be doing nothing to control denoise:

Screenshot 2023-12-01 at 17 16 07

Am I missing something? Thank you.

Efficiency 2.0 - the preview stopped working in all non-Efficiency nodes.

Re-opened my issue from the original repo.

The issue appears as follows: if Efficiency 2.0 is not loaded in ComfyUI, the preview in all nodes works without any problems. If Efficiency 2.0 is loaded but its nodes were not used in the workflow, the preview also works. However, once KSampler (Efficient) has run, the preview in other nodes is blocked, while in KSampler (Efficient) the preview works fine. As I understand it, the block occurs at the web interface level, since from the same browser I launched a workflow on a remote ComfyUI server (without Efficiency 2.0) and the preview did not work in that case either.

System: Ubuntu 23.04
Browser: Chromium 118.0.5993.88

[BUG] AttributeError: 'BaseModel' object has no attribute 'num_timesteps'

When queuing the following workflow, I receive an error on the KSampler Adv. (Efficient): AttributeError: 'BaseModel' object has no attribute 'num_timesteps'.
Workflow for replication: Bug_Report.json

Expected outcome:
The workflow should queue and generate an xy plot with each model located in the folder with 4 seeds.

See below for the error log:

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
got prompt
Requested to load SD1ClipModel
Loading 1 new model
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
----------------------------------------
Efficient Loader Models Cache:
Ckpt:
  [1] AnythingV5V3_v5PrtRE
Vae:
  [1] blessed2.vae
----------------------------------------
XY Plot Script Inputs:
(X) Seeds++ Batch:
    8948454
    8948455
    8948456
    8948457
(Y) Checkpoint:
    ('AnythingV5V3_v5PrtRE.safetensors', None, None)
    ('CounterfeitV30_v30.safetensors', None, None)
    ('FaceBombMix-fp16-no-ema.safetensors', None, None)
    ('MF-AscendanceOfABookworm_V2_T3.1.safetensors', None, None)
    ('MF-EminenceInShadow_V1.ckpt', None, None)
    ('MF-KonoSuba-V1.1-T2.11.ckpt', None, None)
    ('MeinaHentai V5.safetensors', None, None)
    ('MyneFactoryBase V1.0.safetensors', None, None)
    ('SomethingV2_2.safetensors', None, None)
    ('aurora_v10.safetensors', None, None)
    ('bofuri-ep01-gs57751.safetensors', None, None)
    ('calicomix_v75.safetensors', None, None)
    ('cetusMix_v4.safetensors', None, None)
    ('corneos7thHeavenMix_100.safetensors', None, None)
    ('ctd-darkmix.safetensors', None, None)
    ('darkSushi25D25D_v40.safetensors', None, None)
    ('deepboys25D_v30.safetensors', None, None)
    ('detailedprojectv4-fin.safetensors', None, None)
    ('dpep3-chillout.safetensors', None, None)
    ('dreamlike-anime-1.0.safetensors', None, None)
    ('hell5-dpep.safetensors', None, None)
    ('meinaalter_v3.safetensors', None, None)
    ('meinamix_meinaV11.safetensors', None, None)
    ('sakushimixFinished_sakushimixFinal.safetensors', None, None)
    ('sakushimixHentai_v11.safetensors', None, None)
    ('udonNoodleMix_udonNoodleMix.safetensors', None, None)
----------------------------------------
Requested to load SD1ClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 2175, in sample_adv
    return super().sample(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 1542, in sample
    process_values(model, refiner_model, add_noise, seed, steps, start_at_step, end_at_step,
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 1413, in process_values
    samples, images, _, _ = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler, positive, negative,
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 537, in process_latent_image
    samples = KSamplerAdvanced().sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\nodes.py", line 1271, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\nodes.py", line 1207, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 147, in animatediff_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\comfy\sample.py", line 98, in sample
    sampler = comfy.samplers.KSampler(real_model, steps=steps, device=model.load_device, sampler=sampler_name, scheduler=scheduler, denoise=denoise, model_options=model.model_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\smZ_cfg_denoiser.py", line 321, in __init__
    set_model_k(self)
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\smZ_cfg_denoiser.py", line 308, in set_model_k
    self.model_denoise = CFGNoisePredictor(self.model) # main change
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\smZ_cfg_denoiser.py", line 256, in __init__
    self.inner_model2.num_timesteps = model.num_timesteps
                                      ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ali\miniconda3\envs\comfy\Lib\site-packages\torch\nn\modules\module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'BaseModel' object has no attribute 'num_timesteps'

Prompt executed in 4.54 seconds

If you require anything additional, please let me know!

Platform: Windows
Install method: Manual (Conda)
Python: 3.11
Commit: 8cae155

Efficient Loader Error

Error occurred when executing Efficient Loader:

'NoneType' object has no attribute 'lower'

File "H:\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "H:\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "H:\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 151, in efficientloader
model, clip = load_lora(lora_params, ckpt_name, my_unique_id, cache=lora_cache, ckpt_cache=ckpt_cache, cache_overwrite=True)
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 372, in load_lora
lora_model, lora_clip = recursive_load_lora(lora_params, ckpt, clip, id, ckpt_cache, cache_overwrite, folder_paths)
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 366, in recursive_load_lora
return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 366, in recursive_load_lora
return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 366, in recursive_load_lora
return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)
[Previous line repeated 5 more times]
File "H:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 363, in recursive_load_lora
lora_model, lora_clip = comfy.sd.load_lora_for_models(ckpt, clip, comfy.utils.load_torch_file(lora_path), strength_model, strength_clip)
File "H:\ComfyUI\comfy\utils.py", line 12, in load_torch_file
if ckpt.lower().endswith(".safetensors"):

Screenshot 2023-11-18 220541
