ComfyUI-Inspire-Pack

This repository offers various extension nodes for ComfyUI. Nodes here have different characteristics compared to those in the ComfyUI Impact Pack. The Impact Pack has become too large now...

License: GNU General Public License v3.0

Notice:

  • V0.69 is incompatible with outdated versions of ComfyUI IPAdapter Plus. (A version dated March 24th or later is required.)
  • V0.64 adds sigma_factor to the RegionalPrompt... nodes, which requires Impact Pack V4.76 or later.
  • V0.62 supports FaceID in Regional IPAdapter.
  • V0.48 optimizes the wildcard node. This update requires Impact Pack V4.39.2 or later.
  • V0.13.2 isn't compatible with old versions of ControlNet Auxiliary Preprocessors. If you use MediaPipeFaceMeshDetectorProvider, update to the latest version (Sep. 17th or later).
  • WARN: If you used versions 0.12 through 0.12.2 without a GlobalSeed node, your workflow's seed may have been erased. Please update immediately.

Nodes

  • Lora Block Weight - This is a node that provides functionality related to Lora block weight.

    • This provides functionality similar to sd-webui-lora-block-weight.
    • Lora Loader (Block Weight): When loading a Lora, the block weight vector is applied.
      • In the block vector, you can use numbers, R, A, a, B, and b.
      • R is determined sequentially based on a random seed, while A and B represent the values of the A and B parameters, respectively. a and b are half of the values of A and B, respectively.
    • XY Input: Lora Block Weight: This node lets you use Lora block weight in the Efficiency Nodes' XY Plot.
      • You must ensure that the X and Y connections are made, and dependencies should be connected to the XY Plot.
      • Note: To use this feature, update Efficiency Nodes to a version released after September 3rd.
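As a rough illustration of the block vector syntax described above, a hypothetical resolver might expand the R/A/a/B/b tokens like this (the function name and details are assumptions for illustration, not the pack's actual code):

```python
import random

def resolve_block_vector(vector: str, seed: int, A: float, B: float) -> list[float]:
    """Hypothetical sketch: resolve a block weight vector string such as
    "1,0,R,A,a,B,b" into concrete per-block weights.
    R -> a random value drawn sequentially from `seed`,
    A/B -> the A and B parameter values, a/b -> half of A and B."""
    rng = random.Random(seed)  # sequential, seed-determined draws for R
    weights = []
    for token in vector.split(","):
        token = token.strip()
        if token == "R":
            weights.append(round(rng.random(), 3))
        elif token == "A":
            weights.append(A)
        elif token == "a":
            weights.append(A / 2)
        elif token == "B":
            weights.append(B)
        elif token == "b":
            weights.append(B / 2)
        else:
            weights.append(float(token))
    return weights
```

Because R is drawn from a seeded generator, the same seed always resolves to the same weights, which is what makes R-based plots reproducible.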
  • SEGS Support nodes - These nodes support ApplyControlNet (SEGS) from the Impact Pack.

    • OpenPose Preprocessor Provider (SEGS): OpenPose preprocessor is applied for the purpose of using OpenPose ControlNet in SEGS.
    • Canny Preprocessor Provider (SEGS): Canny preprocessor is applied for the purpose of using Canny ControlNet in SEGS.
    • DW Preprocessor Provider (SEGS), MiDaS Depth Map Preprocessor Provider (SEGS), LeReS Depth Map Preprocessor Provider (SEGS), MediaPipe FaceMesh Preprocessor Provider (SEGS), HED Preprocessor Provider (SEGS), Fake Scribble Preprocessor (SEGS), AnimeLineArt Preprocessor Provider (SEGS), Manga2Anime LineArt Preprocessor Provider (SEGS), LineArt Preprocessor Provider (SEGS), Color Preprocessor Provider (SEGS), Inpaint Preprocessor Provider (SEGS), Tile Preprocessor Provider (SEGS), MeshGraphormer Depth Map Preprocessor Provider (SEGS)
    • MediaPipeFaceMeshDetectorProvider: This node provides BBOX_DETECTOR and SEGM_DETECTOR that can be used in Impact Pack's Detector using the MediaPipe-FaceMesh Preprocessor of ControlNet Auxiliary Preprocessors.
  • A1111 Compatibility support - These nodes assist in exactly replicating A1111's generation results in ComfyUI.

    • KSampler (Inspire): ComfyUI generates random noise on the CPU, while A1111 uses the GPU. This is one of the three factors that significantly affect reproducing A1111's results in ComfyUI, and it can be addressed using KSampler (Inspire).
      • Other point #1 : Please make sure you haven't forgotten to include 'embedding:' in the embedding used in the prompt, like 'embedding:easynegative.'
      • Other point #2 : ComfyUI and A1111 have different interpretations of weighting. To align them, you need to use BlenderNeko/Advanced CLIP Text Encode.
    • KSamplerAdvanced (Inspire): Inspire Pack version of KSampler (Advanced).
    • RandomNoise (inspire): Inspire Pack version of RandomNoise.
    • Common Parameters
      • batch_seed_mode determines how seeds are applied to batch latents:
        • comfy: This method applies noise to all latents in the batch at once. This helps prevent duplicate images from being generated due to seed duplication within a batch.
        • incremental: Similar to A1111, this method increments the seed and applies noise sequentially for each latent in the batch. This approach makes straightforward reproduction using only the seed possible.
        • variation_strength: In each batch, the variation strength starts at the configured variation_strength and increases by xxx.
      • variation_seed and variation_strength - Initial noise generated by the seed is transformed to the shape of variation_seed by variation_strength. If variation_strength is 0, it only relies on the influence of the seed, and if variation_strength is 1.0, it is solely influenced by variation_seed.
        • These parameters are used when you want to maintain the composition of an image generated by the seed but wish to introduce slight changes.
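The variation_seed/variation_strength behavior described above can be sketched as a simple blend between two noise tensors. This is a minimal illustration of the endpoints only; the actual node may use spherical rather than linear interpolation:

```python
def blend_noise(seed_noise, variation_noise, strength):
    """Hedged sketch: transform the initial noise toward the variation_seed
    noise by variation_strength.
    strength == 0.0 -> seed noise only; strength == 1.0 -> variation noise only.
    Plain lists stand in for noise tensors here."""
    return [(1.0 - strength) * s + strength * v
            for s, v in zip(seed_noise, variation_noise)]
```

At intermediate strengths the result keeps most of the original composition while drifting toward the variation noise, which is exactly the "slight changes" use case described above.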
  • Prompt Support - These are nodes for supporting prompt processing.

    • Load Prompts From Dir (Inspire): It sequentially reads prompt files from the specified directory. The output it returns is ZIPPED_PROMPT.
      • Specify directories located under ComfyUI-Inspire-Pack/prompts/
      • One prompt file can contain multiple prompts separated by ---.
      • e.g. prompts/example
    • Load Prompts From File (Inspire): It sequentially reads prompts from the specified file. The output it returns is ZIPPED_PROMPT.
      • Specify the file located under ComfyUI-Inspire-Pack/prompts/
      • e.g. prompts/example/prompt2.txt
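A minimal sketch of reading such a prompts file, assuming only the --- separator behavior described above (the helper name and the raw-chunk output are assumptions; the real node additionally splits each chunk into positive/negative parts):

```python
import os

def load_prompts_from_file(path: str) -> list[tuple[str, str]]:
    """Sketch: read a prompts file and split it into individual prompts on
    '---' separator lines, pairing each chunk with the file name (which is
    what Load Prompts From File (Inspire) reports as the prompt name)."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    chunks = [c.strip() for c in text.split("\n---\n") if c.strip()]
    name = os.path.splitext(os.path.basename(path))[0]
    return [(name, chunk) for chunk in chunks]
```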
    • Unzip Prompt (Inspire): Separate ZIPPED_PROMPT into positive, negative, and name components.
      • positive and negative represent text prompts, while name represents the name of the prompt. When loaded from a file using Load Prompts From File (Inspire), the name corresponds to the file name.
    • Zip Prompt (Inspire): Create ZIPPED_PROMPT from positive, negative, and name_opt.
      • If name_opt is omitted, it will be considered as an empty name.
    • Prompt Extractor (Inspire): This node reads prompt information from the image's metadata. Since it retrieves all the text, you need to directly specify the prompts to be used for positive and negative as indicated in the info.
    • Global Seed (Inspire): This node controls the global seed without a separate connection line. It only affects widgets named 'seed' or 'noise_seed'. Additionally, if 'control_before_generate' is checked, the seed is set before the prompt is executed.
      • Seeds that have been converted into inputs are excluded from the target. If you want to control such a seed separately, convert it into an input and control it on its own.
    • Global Sampler (Inspire): This node is similar to GlobalSeed and can simultaneously set the sampler_name and scheduler for all nodes in the workflow.
      • It applies only to nodes that have both sampler_name and scheduler, and it has no effect while GlobalSampler is muted.
      • If sampler_name or scheduler has been converted to an input and connected to a Primitive node, the setting is not applied to that converted widget; widgets that have not been converted are still affected.
    • Bind [ImageList, PromptList] (Inspire): Bind Image list and zipped prompt list to export image, positive, negative, and prompt_label in a list format. If there are more prompts than images, the excess prompts are ignored, and if there are not enough, the remainder is filled with default input based on the images.
    • Wildcard Encode (Inspire): The combination node of ImpactWildcardEncode and BlenderNeko's CLIP Text Encode (Advanced).
      • To use this node, you need both the Impact Pack and the Advanced CLIP Text Encode extensions.
      • This node is identical to ImpactWildcardEncode, but it encodes using CLIP Text Encode (Advanced) instead of the default CLIP Text Encode from ComfyUI for CLIP Text Encode.
      • Requirement: Impact Pack V4.18.6 or above
    • Prompt Builder (Inspire): This node is a convenience node that allows you to easily assemble prompts by selecting categories and presets. To modify the presets, edit the ComfyUI-InspirePack/resources/prompt-builder.yaml file.
    • Seed Explorer (Inspire): This node helps explore seeds by allowing you to adjust the variation seed gradually in a prompt-like form.
      • This feature is designed for utilizing a seed that you like, adding slight variations, and then further modifying from there when exploring.
      • In the seed_prompt, the first seed is considered the initial seed, and the reflection rate is omitted, always defaulting to 1.0.
      • Each prompt is separated by a comma, and from the second seed onwards, it should follow the format seed:strength.
      • Pressing the "Add to prompt" button will append additional_seed:additional_strength to the prompt.
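The seed_prompt format described above (first entry is the initial seed with an implicit strength of 1.0; later entries are seed:strength) could be parsed like this. This is an illustrative sketch, not the pack's actual parser:

```python
def parse_seed_prompt(seed_prompt: str) -> list[tuple[int, float]]:
    """Sketch: parse a comma-separated seed prompt into (seed, strength)
    pairs. The first entry has no strength and defaults to 1.0; the rest
    follow the seed:strength format."""
    parts = [p.strip() for p in seed_prompt.split(",") if p.strip()]
    result = []
    for i, part in enumerate(parts):
        if i == 0:
            result.append((int(part), 1.0))  # initial seed, strength fixed at 1.0
        else:
            seed, strength = part.split(":")
            result.append((int(seed), float(strength)))
    return result
```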
    • Random Generator for List (Inspire): When connecting the list output to the signal input, this node generates random values for all items in the list.
    • Make Basic Pipe (Inspire): This node creates a BASIC_PIPE using Wildcard Encode. The Add select to option determines whether the item chosen in the Select to... combo is appended to the positive or the negative wildcard text.
    • Remove ControlNet (Inspire), Remove ControlNet [RegionalPrompts] (Inspire): Remove ControlNet from CONDITIONING or REGIONAL_PROMPTS.
      • Remove ControlNet [RegionalPrompts] (Inspire) requires Impact Pack V4.73.1 or above.
  • Regional Nodes - These nodes simplify the application of prompts by region.

    • Regional Sampler - These nodes assist in the easy utilization of the regional sampler in the Impact Pack.
      • Regional Prompt Simple (Inspire): This node takes mask and basic_pipe as inputs and simplifies the creation of REGIONAL_PROMPTS.
      • Regional Prompt By Color Mask (Inspire): Similar to Regional Prompt Simple (Inspire), but this node accepts a color mask image as input and defines the region via the specified color value, instead of directly receiving a mask.
        • The color value can only be in the form of a hex code like #FFFF00 or a decimal number.
    • Regional Conditioning - These nodes provide assistance in simplifying the use of Conditioning (Set Mask).
      • Regional Conditioning Simple (Inspire)
      • Regional Conditioning By Color Mask (Inspire)
    • Regional IPAdapter - These nodes facilitate convenient use of the attn_mask feature in the ComfyUI IPAdapter Plus custom nodes.
      • To use this node, you need to install the ComfyUI IPAdapter Plus extension.
      • Regional IPAdapter Mask (Inspire), Regional IPAdapter By Color Mask (Inspire)
      • Regional IPAdapter Encoded Mask (Inspire), Regional IPAdapter Encoded By Color Mask (Inspire): accept embeds instead of image
    • Regional Seed Explorer - These nodes restrict the variation through a seed prompt, applying it only to the masked areas.
      • Regional Seed Explorer By Mask (Inspire)
      • Regional Seed Explorer By Color Mask (Inspire)
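Several of the By Color Mask nodes above accept the region color either as a hex code like #FFFF00 or as a decimal number. A sketch of normalizing such a value to an RGB tuple (the helper name is an assumption):

```python
def parse_mask_color(color: str) -> tuple[int, int, int]:
    """Sketch: accept a hex code such as '#FFFF00' or a decimal number
    such as '16776960' and return an (R, G, B) tuple."""
    if color.startswith("#"):
        value = int(color[1:], 16)
    else:
        value = int(color)
    # Unpack 0xRRGGBB into its three 8-bit channels.
    return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)
```

Both forms denote the same 24-bit value, so #FFFF00 and 16776960 select the same region color.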
  • Image Util

    • Load Image Batch From Dir (Inspire): This is almost the same as LoadImagesFromDirectory of ComfyUI-Advanced-Controlnet; it is just a modified version. Note that this node forcibly normalizes the size of every loaded image to match the size of the first image, even if the sizes differ, in order to create a batch.
    • Load Image List From Dir (Inspire): This is almost the same as Load Image Batch From Dir (Inspire). However, this node loads data as a list rather than a batch, so it returns images at their original sizes without normalizing them.
    • Load Image (Inspire): This node is similar to LoadImage, but the image itself is stored inside the workflow, making it easier to reproduce image generation on other computers.
    • Change Image Batch Size (Inspire): Change Image Batch Size
      • simple: if the batch_size is larger than the batch size of the input image, the last frame will be duplicated. If it is smaller, it will be simply cropped.
    • Change Latent Batch Size (Inspire): Change Latent Batch Size
    • ImageBatchSplitter //Inspire, LatentBatchSplitter //Inspire: These nodes split a batch of images/latents into individual images/latents, up to the specified split_count; an additional output slot is added for each split_count. If the number of images/latents exceeds split_count, the remaining ones are returned together through the "remained" output.
    • Color Map To Masks (Inspire): From the color_map, it extracts the top max_count number of colors and creates masks. min_pixels represents the minimum number of pixels for each color.
    • Select Nth Mask (Inspire): Extracts the nth mask from the mask batch.
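The 'simple' mode of Change Image Batch Size described above (duplicate the last frame when growing, crop when shrinking) can be sketched as follows, with a plain Python list standing in for an image batch:

```python
def change_batch_size_simple(batch: list, batch_size: int) -> list:
    """Sketch of the 'simple' mode: if batch_size exceeds the input batch,
    duplicate the last frame to pad; if it is smaller, crop the batch."""
    if batch_size <= len(batch):
        return batch[:batch_size]          # crop
    return batch + [batch[-1]] * (batch_size - len(batch))  # pad with last frame
```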
  • KSampler Progress - This sampler collects the latents produced during the sampling process into a latent batch. By using the Video Combine node from ComfyUI-VideoHelperSuite, you can create a video from the progress.

  • Backend Cache - Nodes for storing arbitrary data from the backend in a cache and sharing it across multiple workflows.

    • Cache Backend Data (Inspire): Stores any backend data in the cache using a string key. Tags are for quick reference.
    • Retrieve Backend Data (Inspire): Retrieves cached backend data using a string key.
    • Remove Backend Data (Inspire): Removes cached backend data.
      • Deletion in this node only removes it from the cache managed by Inspire, and if it's still in use elsewhere, it won't be completely removed from memory.
      • signal_opt is used to control the order of execution for this node; it will still run without a signal_opt input.
      • When using '*' as the key, it clears all data.
    • Show Cached Info (Inspire): Displays information about cached data.
      • Default tag cache size is 5. You can edit the default size of each tag in cache_settings.json.
      • Runtime tag cache size can be modified on the Show Cached Info (Inspire) node. For example: ckpt: 10.
    • Cache Backend Data [NumberKey] (Inspire), Retrieve Backend Data [NumberKey] (Inspire), Remove Backend Data [NumberKey] (Inspire): These nodes are provided for convenience in the automation process, allowing the use of numbers as keys.
    • Cache Backend Data List (Inspire), Cache Backend Data List [NumberKey] (Inspire): These nodes allow list input for the backend cache. In contrast, nodes like Cache Backend Data [NumberKey] (Inspire) that do not accept list input will, when given a list, cache each element redundantly and overwrite existing data; a unique key per element is needed to prevent this. These nodes cache the combined list, and when cached backend data is retrieved through them, the output is returned as a list.
    • Shared Checkpoint Loader (Inspire): When loading a checkpoint through this loader, it is automatically cached in the backend cache. Additionally, if it is already cached, it retrieves it from the cache instead of loading it anew.
      • When key_opt is empty, the ckpt_name is set as the cache key. The cache key output can be used for deletion via Remove Backend Data (Inspire).
      • This node resolves the issue of reloading checkpoints during workflow switching.
    • Stable Cascade Checkpoint Loader (Inspire): This node provides a feature that allows you to load the stage_b and stage_c checkpoints of Stable Cascade at once, and it also provides a backend caching feature, optionally.
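The cache behavior described in this section (string keys, removal semantics, and '*' clearing everything) can be illustrated with a minimal in-memory sketch. This is not the pack's actual implementation; names and structure are assumptions:

```python
class BackendCache:
    """Illustrative sketch of the backend cache behavior: store arbitrary
    data under a string key with a tag, and clear everything when the
    key '*' is removed."""

    def __init__(self):
        self._store = {}  # key -> (tag, data)

    def cache(self, key: str, tag: str, data) -> None:
        self._store[key] = (tag, data)

    def retrieve(self, key: str):
        tag, data = self._store[key]
        return data

    def remove(self, key: str) -> None:
        if key == "*":
            # '*' clears all cached data, per the node description.
            self._store.clear()
        else:
            # Removal only drops the cache's reference; data still
            # referenced elsewhere stays alive (ordinary Python GC semantics),
            # matching the note about incomplete memory removal above.
            self._store.pop(key, None)
```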
  • Conditioning - Nodes for conditionings

    • Concat Conditionings with Multiplier (Inspire): Concatenates an arbitrary number of Conditionings while applying a multiplier to each one. The multiplier depends on comfy_PoP, so comfy_PoP must be installed.
  • Models - Nodes for models

    • IPAdapter Model Helper (Inspire): This provides presets that allow easy loading of IPAdapter-related models. However, the model file names must exactly match those expected by the presets.
      • You can download the appropriate model through ComfyUI-Manager.
  • Util - Utilities

    • Float Range (Inspire): Create a float list that increases the value by step from start to stop. A list as large as the maximum limit is created, and when ensure_end is enabled, the last value of the list becomes the stop value.
    • ToIPAdapterPipe (Inspire), FromIPAdapterPipe (Inspire): These nodes assist in conveniently using the bundled ipadapter_model, clip_vision, and model required for applying IPAdapter.
    • List Counter (Inspire): When each item in the list traverses through this node, it increments a counter by one, generating an integer value.
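Float Range (Inspire)'s behavior as described above (step from start toward stop, cap the list at a maximum limit, and force the last value to stop when ensure_end is enabled) might look roughly like this. This is an assumed sketch for positive steps, not the node's actual code:

```python
def float_range(start: float, stop: float, step: float,
                limit: int, ensure_end: bool = False) -> list[float]:
    """Sketch: build a float list from start toward stop in increments of
    step, capped at `limit` entries; with ensure_end, the final entry is
    forced to the stop value. Assumes step > 0 and stop >= start."""
    values = []
    v = start
    while len(values) < limit and v <= stop + 1e-9:  # tolerance for float drift
        values.append(round(v, 9))
        v += step
    if ensure_end and values and values[-1] != stop:
        values[-1] = stop
    return values
```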

Credits

ComfyUI/ComfyUI - A powerful and modular stable diffusion GUI.

ComfyUI/sd-webui-lora-block-weight - The original idea for LoraBlockWeight came from here, and it is based on the syntax of this extension.

jags111/efficiency-nodes-comfyui - The XY Input provided by the Inspire Pack supports the XY Plot of this node.

Fannovel16/comfyui_controlnet_aux - The wrapper for the controlnet preprocessor in the Inspire Pack depends on these nodes.

Kosinkadink/ComfyUI-Advanced-Controlnet - The Load Images From Dir (Inspire) code came from here.

Trung0246/ComfyUI-0246 - Nice bypass hack!

cubiq/ComfyUI_IPAdapter_plus - IPAdapter related nodes depend on this extension.

comfyui-inspire-pack's People

Contributors

ech3lon24, ltdrdata, narukaze132, phen-ro, scottnealon

comfyui-inspire-pack's Issues

Feature request "Color Preprocessor Provider (SEGS)"

Hello,

First, thank you for your incredible work!

I'm encountering an issue with the facedetailer adding color to my monochrome images. Is there a possibility of adding a "Color Preprocessor Provider (SEGS)" feature?

Your consideration of this request would be greatly appreciated.

Thank you,

How to get a result similar to "effective block analyzer" in sd-webui using Lora block weight

I used the XY Plot for Lora block weight, trying to get a result similar to "effective block analyzer" in sd-webui, but it never worked. I guess the xyplot mode diff or diff+heatmap is meant for this. When I switched to diff mode and set A and B to different values, it just gave me the same result as simple mode: no comparison images, no diff images, and no error message either. Do I need to do anything more to get the diff output? Thank you.

Inpainting preprocessor?

Hi I wanted to try SEGS for inpainting workflows, but it looks like there's no preprocessor for inpainting controlnet, is it possible to add? Thank you!

Value Sender no longer working

Please help! The Value Sender node does not work after the new update.
I noticed a new method to input a value, but only a Reroute node would connect.

Which kind of input should I put into it?

Is it maybe a bug?

Detailer Hands

Just watched your video on hands, but am hitting an error.

!!! Exception during processing !!!
Traceback (most recent call last):
File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 1703, in doit
DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for, max_size, seed, steps, cfg,
File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 609, in do_detail
enhanced_pil = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for_bbox, max_size,
File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 227, in enhance_detail
positive = control_net_wrapper.apply(positive, upscaled_image)
File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 1268, in apply
image = self.preprocessor.apply(image)
File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\segs_support.py", line 37, in apply
return obj.estimate_pose(image, detect_hand, detect_body, detect_face)[0]
KeyError: 0

Image refiner may need some refactoring

Hey, I was messing around with the image refiner last night and noticed that it was encountering a few errors; see exhibit 1 below. After fixing that, I encountered an issue of a missing function from ComfyUI's main model management module. I don't think this is related to my local install, but it is possible.

exhibit 1
Traceback (most recent call last):
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_app.py", line 504, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
return await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\server.py", line 46, in cache_control
response: web.Response = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 174, in generate
input_data_all = prepare_input(class_def, merged_pil, mask_pil, prompt_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 94, in prepare_input
model, clip, vae = load_checkpoint(v['checkpoint'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 38, in load_checkpoint
model, clip, vae, _ = comfy_nodes.CheckpointLoaderSimple().load_checkpoint(ckpt_name)
^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

exhibit B
Traceback (most recent call last):
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_protocol.py", line 433, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_app.py", line 504, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\venv\Lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
return await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\Stable-Diffusion\ComfyUI\server.py", line 46, in cache_control
response: web.Response = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\custom_server.py", line 69, in imagerefiner_generate
result = ir.generate(base_pil.convert('RGB'), mask_pil, prompt_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\Stable-Diffusion\ComfyUI\custom_nodes\ComfyUI-Workflow-Component\image_refiner\imagerefiner.py", line 177, in generate
comfy.model_management.unload_model()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'comfy.model_management' has no attribute 'unload_model'

Incompatible with Nested Nodes

Just letting you (and others) know that this node pack conflicts with Nested Nodes. I don't know enough about coding to know whether this issue is fixable.

When using MediaPipeFaceMeshDetectorProvider, it will cause 'FaceDetailerPipe' to occur an error

ComfyUI stdout: Traceback (most recent call last):
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 152, in recursive_execute
ComfyUI stdout: output_data, output_ui = get_output_data(obj, input_data_all)
ComfyUI stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 82, in get_output_data
ComfyUI stdout: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
ComfyUI stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\execution.py", line 75, in map_node_over_list
ComfyUI stdout: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
ComfyUI stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 975, in doit
ComfyUI stdout: enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
ComfyUI stdout: ^^^^^^^^^^^^^^^^^^^^^^^^^^
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 392, in enhance_face
ComfyUI stdout: DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg,
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 217, in do_detail
ComfyUI stdout: image_pil.paste(enhanced_pil, (seg.crop_region[0], seg.crop_region[1]), mask_pil)
ComfyUI stdout: File "D:\StableSwarmUI\dlbackend\comfy\python_embeded\Lib\site-packages\PIL\Image.py", line 1713, in paste
ComfyUI stdout: self.im.paste(im, box, mask.im)
ComfyUI stdout: ValueError: images do not match

(Screenshot: 2023-10-08 160955)
I tried updating the nodes used, switching the version of Pillow, and changing the FaceDetailer input images to different sizes.
Impact Pack version:4.12
img2texture 1.1.0
Pillow 10.0.1 & 0.9.5

Possible issue with Image Loader

Not sure if this is an actual bug, but is it possible that the Inspire Pack Image Loader places heavier resource requirements on ComfyUI than the vanilla image loader?

A couple of times in the last week or so, after loading images via the Inspire Pack image loader, ComfyUI started to respond slowly and eventually became non-responsive. When I checked the logs, I noticed large amounts of encoded data. Initially I thought it was the size of the images I'd placed in CLIPspace. However, last night I seemed to fix the issue by deleting the Inspire Image Loader and replacing it with the vanilla one (same images, though). After reverting to the vanilla loader I also went into the log and cleared it, in case ComfyUI allocates background resources to maintaining the log.

Variation seed question

Not an issue (sorry, not sure where to post this, as it's more of a question):
How do we check the variation seed?

I saw your tutorial on variation seed and the other one on exploring variation seeds... but what if we want to find the exact seed and variation seed that was used before? If I run a variation seed with the float, it runs through images as expected... but it's generating multiple images each time. Is there a way to run JUST that seed/variation?
I'm guessing through Seed Explorer and doing the calculations to reach it? Is there something built into the Inspire Pack as a node that achieves this?

Love all the work you do, btw. Really awesome stuff. And your web tutorials! I saw folks mention the lack of voice, but with how much you do and churn these out... it totally makes sense you're getting out what you can, and it's greatly appreciated. Insanely detailed and thorough!

GlobalSeed: seed value taken by the Save Image node for the filename is incorrect

When randomization is turned on together with control_before_generate, and GlobalSeed's value is used in the Save Image node, the value prior to randomization is used for the filename.

Instead, it should be the randomized value.

Here is an example workflow that demonstrates the issue:

1_00001_

Here, the seed value prior to randomization was "1", and it is used for the filename "1_00001_.png". However the randomized seed is 1054289837613396.

Error occurred when executing ApplyRegionalIPAdapters //Inspire:

Error occurred when executing ApplyRegionalIPAdapters //Inspire:

Error(s) in loading state_dict for ImageProjModel:
size mismatch for proj.weight: copying a param with shape torch.Size([8192, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 1024]).

Your Load Image From Dir node is trying to load txt files

Fix this; it's easy, and it is a logical error on your side:

```python
def load_images(self, directory: str, image_load_cap: int = 0, start_index: int = 0):
    if not os.path.isdir(directory):
        raise FileNotFoundError(f"Directory '{directory}' cannot be found.")
    dir_files = os.listdir(directory)
    if len(dir_files) == 0:
        raise FileNotFoundError(f"No files in directory '{directory}'.")

    # Filter files by extension
    valid_extensions = ['.jpg', '.jpeg', '.png']
    dir_files = [f for f in dir_files if any(f.lower().endswith(ext) for ext in valid_extensions)]

    dir_files = sorted(dir_files)
    dir_files = [os.path.join(directory, x) for x in dir_files]

    # start at start_index
    dir_files = dir_files[start_index:]
```

Help debugging SDXL Region Sampler

workflow(1)

  1. Why doesn't this region sampler work?
  2. What are the best practices for splitting up the prompt between base and each region? What goes in the base? How many steps to run it for?

Wildcards do not populate

Wildcards do not load in the "Wildcard Encode (Inspire)" node of the Inspire Pack when I select "populate",

but they work in the "ImpactWildcardEncode" node of the Impact Pack.

This is what I get from the cmd:


got prompt
[ERROR] An error occurred during the on_prompt_handler processing
Traceback (most recent call last):
File "D:\SD\ComfyUI_windows_portable\ComfyUI\server.py", line 633, in trigger_on_prompt
json_data = handler(json_data)
^^^^^^^^^^^^^^^^^^
File "D:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\inspire_server.py", line 203, in onprompt
populate_wildcards(json_data)
File "D:\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\inspire_server.py", line 181, in populate_wildcards
inputs['populated_text'] = wildcard_process(text=inputs['wildcard_text'], seed=int(inputs['seed']))
^^^^^^^^^^^^^^^^^^^
TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'
CLIP:
Requested to load BaseModel
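The TypeError above suggests the seed widget was converted to an input: in ComfyUI API prompts, a linked input is serialized as a list such as `["12", 0]` instead of a literal number, so `int(inputs['seed'])` fails. A minimal defensive sketch of how a handler could cope with this (the helper name and fallback behavior are my own, not the pack's actual code):

```python
def safe_seed(value, fallback=0):
    """Return an int seed, or the fallback when the widget is a node link.

    In ComfyUI API prompts, a widget that has been converted to an input
    is serialized as a link, e.g. ["12", 0], not as a literal value.
    """
    if isinstance(value, list):  # linked input: [node_id, output_slot]
        return fallback
    return int(value)
```

With such a guard, wildcard population could be skipped (or deferred) for linked seeds instead of crashing the on_prompt handler.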


Cannot install it.

Loading: ComfyUI-Manager (V0.26.2)

ComfyUI Revision: 1269 [5ac96897]

[WARN] ComfyUI-Manager: Your ComfyUI version is outdated. Please update to the latest version.

ComfyUI-Manager: Copy .js from '/home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Manager/js/comfyui-manager.js' to '/home/h3c/Documents/ComfyUI/web/extensions/comfyui-manager'

Loading: ComfyUI-Inspire-Pack (V0.4)

Traceback (most recent call last):
  File "/home/h3c/Documents/ComfyUI/nodes.py", line 1647, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack/__init__.py", line 22, in <module>
    imported_module = importlib.import_module(".{}".format(module_name), name)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack/lora_block_weight.py", line 3, in <module>
    import comfy.lora
ModuleNotFoundError: No module named 'comfy.lora'

Cannot import /home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack module for custom nodes: No module named 'comfy.lora'

Loading: ComfyUI-Impact-Pack (V3.24.3)

[WARN] ComfyUI-Impact-Pack: Your ComfyUI version is outdated. Please update to the latest version.

Loading: ComfyUI-Impact-Pack (Subpack: V0.2)

(pysssss:CustomScripts) [warning] it looks like you're running an old version of ComfyUI that requires manual setup of web files, it is recommended you update your installation.
Registered sys.path: ['/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/init.py', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_midas_repo', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_pycocotools', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_mmpkg', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_oneformer', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/custom_detectron2', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src/controlnet_aux', '/home/h3c/Documents/ComfyUI/custom_nodes/comfyui_controlnet_aux/src', '/home/h3c/Documents/ComfyUI/comfy', '/home/h3c/.local/lib/python3.10/site-packages/git/ext/gitdb', '/home/h3c/Documents/ComfyUI', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/home/h3c/.local/lib/python3.10/site-packages', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/tmp/tmpc2nncunf', '../..', '/home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules', '/home/h3c/Documents/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/subpack']

batch_seed_mode with the face_detailer from the impact pack?

Screenshot 2023-12-22 at 6 01 34 PM

Is it possible to get similar functionality in the new "batch_seed_mode" in the face_detailer (from the impact pack) to create consistent faces? Right now the face detailer is generating significantly different faces in a batch even if the underlying images in the batch are very similar to each other.

Greatly appreciate the work!

Mask Seed Exploration Not Respecting Mask

I am running a simple workflow based on your example, but have added in a variable to help keep the seeds aligned. I am using a simple mask to find the face and then using the new regional seed to shift that. However, this does not appear to be working as expected. Just sanity checking what I have here. Workflow attachment in next comments.

X/Y LoRA Block Weight Error

Hitting a wall on a standard X/Y plot. Would appreciate some help trying to get past whatever this is.

Ckpt:
  [1] sd_xl_base_1.0
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\lora_block_weight.py", line 260, in doit
    model_lora, clip_lora, populated_vector = LoraLoaderBlockWeight.load_lora_for_models(model, clip, lora, strength_model, strength_clip, inverse, seed, A, B, block_vector)
  File "D:\stable_diffusion\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\lora_block_weight.py", line 151, in load_lora_for_models
    raise ValueError(f"[LoraLoaderBlockWeight] invalid block_vector '{block_vector}'")
ValueError: [LoraLoaderBlockWeight] invalid block_vector 'A,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
B2'

Prompt executed in 24.10 seconds

Error occurred when executing LoadImagesFromDir //Inspire:

```
Directory 'F:\ai-ref\Openpose\poses cannot be found.'

File "f:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "f:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "f:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\image_util.py", line 31, in load_images
  raise FileNotFoundError(f"Directory '{directory} cannot be found.'")
```

Moving the folder to the ComfyUI directory and writing the path in the format you wrote in #17 does not help
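Windows paths pasted into the widget often carry stray surrounding quotes or whitespace, which makes `os.path.isdir()` fail even though the folder exists. A small normalization sketch the node could apply before the check (the helper name is mine, not the pack's code):

```python
import os

def resolve_directory(directory: str) -> str:
    """Strip surrounding whitespace and quotes, then normalize the
    path separators, before handing the result to os.path.isdir()."""
    cleaned = directory.strip().strip('"').strip("'")
    return os.path.normpath(cleaned)
```

This makes inputs like `"F:\ai-ref\Openpose\poses"` (with literal quotes, as copied from Explorer's "Copy as path") resolve the same as the bare path.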

GlobalSampler //Inspire?

I use the GlobalSeed //Inspire node a lot and I love it!
Would it be possible to have the same for sampler_name/scheduler?

My use case is that I have a larger workflow with many samplers (~14) and sometimes want to test it with different samplers. Extracting as primitive works, but makes the workflow messier, especially since primitives have (had?) race condition bugs with reroutes.

GlobalSeed solves this beautifully for the seed, it would be great to have the same for the sampler/scheduler. Like with the seed, converting sampler_name/scheduler to input should prevent the replacement.

Thanks!
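The requested behavior could plausibly mirror how GlobalSeed works: an on_prompt handler that rewrites matching widgets before execution. This is only a hypothetical sketch of that idea; the function name, node discovery, and widget layout are my assumptions, not the pack's implementation:

```python
def apply_global_sampler(prompt, sampler_name, scheduler):
    """Overwrite literal sampler/scheduler widgets in an API prompt.

    Widgets converted to inputs are serialized as links (lists) and are
    deliberately left untouched, matching GlobalSeed's convention.
    """
    for node in prompt.values():
        inputs = node.get('inputs', {})
        for key, value in (('sampler_name', sampler_name), ('scheduler', scheduler)):
            if key in inputs and not isinstance(inputs[key], list):
                inputs[key] = value
    return prompt
```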

Strange error even though not using Inspire Pack nodes in an API workflow

Hi, I am running an XY plot workflow via the API. Although the workflow contains no Inspire Pack nodes, I get the following message when the script starts (the script does continue and completes without problems, though):

got prompt
[ERROR] An error occurred during the on_prompt_handler processing
Traceback (most recent call last):
  File "/home/jh/ai/ComfyUI/server.py", line 625, in trigger_on_prompt
    json_data = handler(json_data)
  File "/home/jh/ai/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack/inspire_server.py", line 122, in onprompt
    is_changed = prompt_seed_update(json_data)
  File "/home/jh/ai/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack/inspire_server.py", line 63, in prompt_seed_update
    seed_widget_map = json_data['extra_data']['extra_pnginfo']['workflow']['seed_widgets']
KeyError: 'extra_data'
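The KeyError comes from indexing `json_data['extra_data']` directly; prompts submitted via the API typically carry no `extra_data`/`extra_pnginfo` at all, since there is no frontend workflow attached. A defensive lookup sketch (the helper name is mine):

```python
def get_seed_widget_map(json_data):
    """Navigate to seed_widgets defensively; API-submitted prompts
    often lack extra_data/extra_pnginfo entirely."""
    extra = json_data.get('extra_data') or {}
    pnginfo = extra.get('extra_pnginfo') or {}
    workflow = pnginfo.get('workflow') or {}
    return workflow.get('seed_widgets') or {}
```

Returning an empty map lets the handler become a no-op for API prompts instead of logging an error.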

Load Images From Directory

Thought I would try to use this node to read embeds for IPAdapter. Not sure where to put these, so I wanted to try this. However, I can't seem to get the path to work correctly on Windows. I get this error:

Error occurred when executing LoadImagesFromDir //Inspire:

object of type 'NoneType' has no len()

File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\stable_diffusion\ComfyUI\ComfyUI\execution.py", line 96, in get_output_data
output_is_list = [False] * len(results[0])

Lora Block Weights

When doing the testing, should the first block always be 1? Is that representing the base?

B1:A,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
B2:0,A,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
B3:0,0,A,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
B4:0,0,0,A,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
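The vectors above follow a simple pattern (block i isolated, all other blocks zeroed), so they can be generated rather than typed by hand. A sketch assuming a 20-entry vector as shown; the helper name is mine:

```python
def block_test_vectors(num_blocks=20, symbol='A'):
    """One labeled vector per block, with only that block set to `symbol`."""
    vectors = []
    for i in range(num_blocks):
        row = ['0'] * num_blocks
        row[i] = symbol          # isolate block i
        vectors.append(f"B{i + 1}:" + ",".join(row))
    return vectors
```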

Feature request: Text from Directory

Can you create a text import function, along the lines of Load Images From Directory, that lets us load the contents of multiple text files in a folder?

AttributeError: type object 'ImpactWildcardEncode' has no attribute 'get_wildcard_list'

when starting comfyui I get this after all nodes are loaded:

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\Ai_art\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
[ERROR] An error occurred while retrieving information for the 'WildcardEncode //Inspire' node.
Traceback (most recent call last):
  File "D:\Ai_art\ComfyUI_windows_portable\ComfyUI\server.py", line 417, in get_object_info
    out[x] = node_info(x)
  File "D:\Ai_art\ComfyUI_windows_portable\ComfyUI\server.py", line 395, in node_info
    info['input'] = obj_class.INPUT_TYPES()
  File "D:\Ai_art\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\prompt_support.py", line 318, in INPUT_TYPES
    wildcards = nodes.NODE_CLASS_MAPPINGS['ImpactWildcardEncode'].get_wildcard_list()
AttributeError: type object 'ImpactWildcardEncode' has no attribute 'get_wildcard_list'

WildcardEncode (Inspire Pack) seed doesn't work when converted to an input

When the seed widget is converted to an input, it stays either random or fixed as it was before the conversion, and does not follow the value from the connected node (see attached screenshot).

By the way, GlobalSeed (Inspire Pack) also does not seem to work well with control_before_generate.

Error occurred when executing LoraLoaderBlockWeight

hi. i get the following error when using the LoRA Loader Block weight:

Error occurred when executing LoraLoaderBlockWeight //Inspire:

Error while deserializing header: HeaderTooLarge

File "/home/user/ai/ComfyUI/execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/user/ai/ComfyUI/execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/user/ai/ComfyUI/execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/user/ai/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack/lora_block_weight.py", line 250, in doit
  lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
File "/home/user/ai/ComfyUI/comfy/utils.py", line 11, in load_torch_file
  sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/home/user/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/safetensors/torch.py", line 259, in load_file
  with safe_open(filename, framework="pt", device=device) as f:
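`HeaderTooLarge` usually means the file is not actually a safetensors file (for example a pickle-based `.ckpt` renamed to `.safetensors`) or is corrupted/truncated: the format starts with an 8-byte little-endian header length followed by that many bytes of JSON, and a garbage first 8 bytes decode to an absurd length. A rough pre-check sketch (the function and the 100 MB cap are my own, not part of ComfyUI or safetensors):

```python
import json
import struct

def looks_like_safetensors(blob: bytes, max_header=100 * 1024 * 1024) -> bool:
    """Sanity-check the safetensors layout: an 8-byte little-endian
    header length, then that many bytes of parseable JSON."""
    if len(blob) < 8:
        return False
    (n,) = struct.unpack('<Q', blob[:8])
    if n == 0 or n > max_header or len(blob) < 8 + n:
        return False  # an absurd n is what safetensors reports as HeaderTooLarge
    try:
        json.loads(blob[8:8 + n])
    except ValueError:
        return False
    return True
```

Re-downloading the LoRA file is usually the practical fix; this check just confirms the diagnosis.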

Shows as a missing node after install

I installed this node pack both manually and automatically using the "Install Missing Custom Nodes" feature, but it still shows as missing and appears in the missing custom nodes list. I've tried a fresh install of ComfyUI and still have the same issue. Any idea how to resolve this?

I'm really frustrated...

The MediaPipe face detector for SEGS was working perfectly for me as of September 10th. Then both you and Fannovel16 updated your packs, and now I can't get it to detect the face part correctly no matter what I do. I tried to revert to an earlier commit, but unfortunately, evidently I'm not smart enough to make that work. Can you look into whether this can be made to work as it did when you first made it? Thanks!

FR: Prompt support for CSV

The current prompt reading requires one file per prompt, which is painful when I already have a set of 1000 prompts in a CSV file. Can we have CSV as another option here, and perhaps use something like Compel for prompt weighting and negatives? Getting all of the prompts into one file is the primary pain point I am trying to resolve.
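A minimal sketch of the requested reader, assuming one prompt per CSV row with an optional negative prompt in the second column (the function name and column convention are my assumptions, not an existing node):

```python
import csv
import io

def load_prompts_csv(text: str):
    """Parse rows of 'positive[,negative]' into (positive, negative) pairs,
    skipping blank rows."""
    prompts = []
    for row in csv.reader(io.StringIO(text)):
        if not row or not row[0].strip():
            continue
        positive = row[0].strip()
        negative = row[1].strip() if len(row) > 1 else ""
        prompts.append((positive, negative))
    return prompts
```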

Error - AttributeError: 'Logger' object has no attribute 'encoding'

Running windows 10.

The module at the custom_nodes/ComfyUI-Impact-Pack/impact_subpack path appears to be incomplete.
Recommended to delete the path and restart ComfyUI.
If the issue persists, please report it to https://github.com/ltdrdata/ComfyUI-Impact-Pack/issues.


Traceback (most recent call last):
  File "D:\Projects\comphy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\__init__.py", line 408, in <module>
    import impact.subpack_nodes
  File "D:\Projects\comphy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subpack_nodes.py", line 4, in <module>
    import impact.subcore as subcore
  File "D:\Projects\comphy\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack\impact\subcore.py", line 10, in <module>
    from ultralytics import YOLO
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\__init__.py", line 5, in <module>
    from ultralytics.models import RTDETR, SAM, YOLO
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\models\__init__.py", line 3, in <module>
    from .rtdetr import RTDETR
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\models\rtdetr\__init__.py", line 3, in <module>
    from .model import RTDETR
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\models\rtdetr\model.py", line 10, in <module>
    from ultralytics.engine.model import Model
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\engine\model.py", line 8, in <module>
    from ultralytics.cfg import TASK2DATA, get_cfg, get_save_dir
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\cfg\__init__.py", line 10, in <module>
    from ultralytics.utils import (ASSETS, DEFAULT_CFG, DEFAULT_CFG_DICT, DEFAULT_CFG_PATH, LOGGER, RANK, ROOT, RUNS_DIR,
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\utils\__init__.py", line 265, in <module>
    LOGGER = set_logging(LOGGING_NAME, verbose=VERBOSE)  # define globally (used in train.py, val.py, predict.py, etc.)
  File "D:\Projects\comphy\ComfyUI\env\lib\site-packages\ultralytics\utils\__init__.py", line 233, in set_logging
    if WINDOWS and sys.stdout.encoding != 'utf-8':
AttributeError: 'Logger' object has no attribute 'encoding'

Lora Block Weight node not working

I followed the YouTube example (https://youtu.be/X9v0xQrInn8). What I found was that the Block Weight node doesn't seem to be loading the LoRA: I ended up with the same 3 images recurring no matter which blocks were being activated in any scenario (NONE, ALL, MIDD, or FULL TEST 17).

E.g.:
Model: Henmix_Real_v40
Lora: edgKM
