pydn / comfyui-to-python-extension

A powerful tool that translates ComfyUI workflows into executable Python code.

License: MIT License

comfyui image-generation stable-diffusion ai-art generative-art pytorch


ComfyUI-to-Python-Extension

The ComfyUI-to-Python-Extension is a powerful tool that translates ComfyUI workflows into executable Python code. Designed to bridge the gap between ComfyUI's visual interface and Python's programming environment, this script facilitates the seamless transition from design to code execution. Whether you're a data scientist, a software developer, or an AI enthusiast, this tool streamlines the process of implementing ComfyUI workflows in Python.

Convert this:

SDXL UI Example

To this:

import random
import torch
import sys

sys.path.append("../")
from nodes import (
    VAEDecode,
    KSamplerAdvanced,
    EmptyLatentImage,
    SaveImage,
    CheckpointLoaderSimple,
    CLIPTextEncode,
)


def main():
    with torch.inference_mode():
        checkpointloadersimple = CheckpointLoaderSimple()
        checkpointloadersimple_4 = checkpointloadersimple.load_checkpoint(
            ckpt_name="sd_xl_base_1.0.safetensors"
        )

        emptylatentimage = EmptyLatentImage()
        emptylatentimage_5 = emptylatentimage.generate(
            width=1024, height=1024, batch_size=1
        )

        cliptextencode = CLIPTextEncode()
        cliptextencode_6 = cliptextencode.encode(
            text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
            clip=checkpointloadersimple_4[1],
        )

        cliptextencode_7 = cliptextencode.encode(
            text="text, watermark", clip=checkpointloadersimple_4[1]
        )

        checkpointloadersimple_12 = checkpointloadersimple.load_checkpoint(
            ckpt_name="sd_xl_refiner_1.0.safetensors"
        )

        cliptextencode_15 = cliptextencode.encode(
            text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
            clip=checkpointloadersimple_12[1],
        )

        cliptextencode_16 = cliptextencode.encode(
            text="text, watermark", clip=checkpointloadersimple_12[1]
        )

        ksampleradvanced = KSamplerAdvanced()
        vaedecode = VAEDecode()
        saveimage = SaveImage()

        for q in range(10):
            ksampleradvanced_10 = ksampleradvanced.sample(
                add_noise="enable",
                noise_seed=random.randint(1, 2**64),
                steps=25,
                cfg=8,
                sampler_name="euler",
                scheduler="normal",
                start_at_step=0,
                end_at_step=20,
                return_with_leftover_noise="enable",
                model=checkpointloadersimple_4[0],
                positive=cliptextencode_6[0],
                negative=cliptextencode_7[0],
                latent_image=emptylatentimage_5[0],
            )

            ksampleradvanced_11 = ksampleradvanced.sample(
                add_noise="disable",
                noise_seed=random.randint(1, 2**64),
                steps=25,
                cfg=8,
                sampler_name="euler",
                scheduler="normal",
                start_at_step=20,
                end_at_step=10000,
                return_with_leftover_noise="disable",
                model=checkpointloadersimple_12[0],
                positive=cliptextencode_15[0],
                negative=cliptextencode_16[0],
                latent_image=ksampleradvanced_10[0],
            )

            vaedecode_17 = vaedecode.decode(
                samples=ksampleradvanced_11[0], vae=checkpointloadersimple_12[2]
            )

            saveimage_19 = saveimage.save_images(
                filename_prefix="ComfyUI", images=vaedecode_17[0]
            )


if __name__ == "__main__":
    main()

Potential Use Cases

  • Streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow
  • Creating programmatic experiments for various prompt/parameter values
  • Creating large queues for image generation (for example, you could adjust the script to generate 1,000 images without pressing Ctrl+Enter 1,000 times)
  • Easily expanding or iterating on your architecture in Python once a foundational workflow is in place in the GUI
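The programmatic-experiment idea above can be sketched in plain Python. This is only an illustration: `run_workflow` is a hypothetical stand-in for a refactored version of the generated `main()` that accepts its prompt and CFG value as parameters.

```python
import itertools

# Hypothetical stand-in for the generated main() body, refactored to take
# the values you want to sweep over as parameters.
def run_workflow(prompt: str, cfg: float) -> str:
    return f"image for {prompt!r} at cfg={cfg}"

prompts = ["glass bottle with a galaxy in it", "evening sunset scenery"]
cfg_values = [6.0, 8.0]

# Run every prompt/parameter combination without touching the GUI.
results = [run_workflow(p, c) for p, c in itertools.product(prompts, cfg_values)]
for line in results:
    print(line)
```

In a real script you would move the sampler and decode calls from the generated `main()` into the swept function and save each image with a distinct filename prefix.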

V1.0.0 Release Notes

  • Use all the custom nodes!
    • Custom nodes are now supported. If you run into any issues with code execution, first ensure that each node works as expected in the GUI. If it works in the GUI but not in the generated script, please submit an issue.

Usage

  1. Navigate to your ComfyUI directory

  2. Clone this repo

    git clone https://github.com/pydn/ComfyUI-to-Python-Extension.git

    After cloning the repo, your ComfyUI directory should look like this:

    /comfy
    /comfy_extras
    /ComfyUI-to-Python-Extension
    /custom_nodes
    /input
    /models
    /output
    /script_examples
    /web
    .gitignore
    LICENSE
    README.md
    comfyui_screenshot.png
    cuda_malloc.py
    execution.py
    extra_model_paths.yaml.example
    folder_paths.py
    latent_preview.py
    main.py
    nodes.py
    requirements.txt
    server.py
    
  3. Navigate to the ComfyUI-to-Python-Extension folder and install requirements

    pip install -r requirements.txt
  4. Launch ComfyUI, click the gear icon over Queue Prompt, then check Enable Dev mode Options. THE SCRIPT WILL NOT WORK IF YOU DO NOT ENABLE THIS OPTION!

Enable Dev Mode Options

  5. Load up your favorite workflow, then click the newly enabled Save (API Format) button under Queue Prompt

  6. Move the downloaded .json workflow file to your ComfyUI/ComfyUI-to-Python-Extension folder

  7. If needed, update the input_file and output_file variables at the bottom of comfyui_to_python.py to match the name of your .json workflow file and your desired .py file name. By default, the script looks for a file called workflow_api.json. You can also update the queue_size variable to the number of images you want to generate in a single script execution. By default, the script generates 10 images.

  8. Run the script:

    python comfyui_to_python.py
  9. After running comfyui_to_python.py, a new .py file will be created in the current working directory. If you made no changes, look for workflow_api.py.

  10. Now you can execute the newly created .py file to generate images without launching a server.

comfyui-to-python-extension's People

Contributors

dimtoneff, felipemurguia, pydn

comfyui-to-python-extension's Issues

Where is `nodes`?

Hello,

Awesome tool! Thank you for creating it. When I run python3 comfyui_to_python.py, the resulting workflow_api.py imports a mysterious `nodes` library that doesn't seem to exist in requirements.txt.

How do I import this library? It feels like it's part of a file that is not included in the repo.


GPU usage

I'm using a few generated scripts on an EC2 instance of type g4dn.xlarge, with Tesla T4 GPU

When running the workflow from the ComfyUI interface, I see traced in logs:
Device: cuda:0 Tesla T4 : cudaMallocAsync
while if I run the generated python script I see:
Device: cuda:0 Tesla T4 : native

The script produces a similar result, but it's a lot slower. I tried adding --cuda-malloc to the command line, but nothing changes. Any clue?

btw: great extension!

Support for Passing Dynamic Prompts via Command-Line Arguments

Hello! Thanks so much for your work on this!

I'm working with a script generated by comfyui_to_python.py and am trying to modify the generated script to accept dynamic prompts through command-line arguments. The goal is to make the script more flexible by allowing users to specify prompts at runtime instead of having them hardcoded or predefined within the script (I'm actually trying to get it to work with LLM agent workflows).

Currently, the script either uses the hard coded prompt or (I assume) requires modification in the source code to change the prompt used in its operations.

Ideally, I would like the ability to pass a prompt directly when executing the script, something like this:
python script_name.py "Custom prompt text here"

This would set the script to use the provided text as the prompt for its execution cycle.

Questions:

  • Is there existing functionality within the script or project that supports this, and I might have overlooked it?
  • If not, could you guide me on how best to implement this feature? Are there any considerations or best practices I should follow to maintain compatibility with the rest of the project?

Attempts:
I've tried modifying the script to parse command-line arguments using the sys.argv list, setting the first argument as the prompt variable. Here's a snippet of my current unsuccessful approach:

...
import sys
# Check for command-line arguments for the prompt
if len(sys.argv) > 1:
    prompt = sys.argv[1]
else:
    prompt = "beautiful scenery nature glass bottle yada yada yada"
...
        cliptextencode = cliptextencode.encode(
            text=prompt,
            clip=get_value_at_index(checkpointloadersimple_4, 1),
        )
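For reference, here is a self-contained sketch of the same idea that does work; argparse gives you a default value and a --help message for free. The default prompt text and the function name are illustrative, not part of the extension:

```python
import argparse
import sys

DEFAULT_PROMPT = "beautiful scenery nature glass bottle"  # illustrative default

def get_prompt(argv=None):
    """Return the prompt passed on the command line, or a default."""
    parser = argparse.ArgumentParser(
        description="Run a generated workflow with a dynamic prompt")
    parser.add_argument("prompt", nargs="?", default=DEFAULT_PROMPT,
                        help="text prompt to use for this run")
    args = parser.parse_args(sys.argv[1:] if argv is None else argv)
    return args.prompt

if __name__ == "__main__":
    # e.g.: python script_name.py "Custom prompt text here"
    print(get_prompt())
```

The returned string can then be passed as the text= argument to the CLIPTextEncode call inside the generated main().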

I appreciate any guidance or suggestions you can provide. Thank you for your time and assistance.

black.parsing.InvalidInput: Cannot parse: 147:629:

I run the workflow in the GUI without any issues. When I try to create a Python script to execute this same workflow, I get the following:

Traceback (most recent call last):
  File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 460, in <module>
    ComfyUItoPython(input_file=input_file, output_file=output_file, queue_size=queue_size)
  File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 425, in __init__
    self.execute()
  File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 445, in execute
    generated_code = code_generator.generate_workflow(load_order, filename=self.output_file, queue_size=self.queue_size)
  File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 243, in generate_workflow
    final_code = self.assemble_python_code(import_statements, special_functions_code, code, queue_size, custom_nodes)
  File "F:\SDXL\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 325, in assemble_python_code
    final_code = black.format_str(final_code, mode=black.Mode())
  File "src\black\__init__.py", line 1172, in format_str
  File "src\black\__init__.py", line 1186, in _format_str_once
  File "src\black\parsing.py", line 89, in lib2to3_parse
black.parsing.InvalidInput: Cannot parse: 147:629: facedetailerpipe_75:4 = facedetailerpipe.doit(guide_size=256, guide_size_for=True, max_size=768, seed=random.randint(1, 2**64), steps=20, cfg=8, sampler_name="dpmpp_sde_gpu", scheduler="normal", denoise=0.3, feather=5, noise_mask=True, force_inpaint=False, bbox_threshold=0.5, bbox_dilation=10, bbox_crop_factor=3, sam_detection_hint="center-1", sam_dilation=0, sam_threshold=0.93, sam_bbox_expansion=0, sam_mask_hint_threshold=0.7, sam_mask_hint_use_negative="False", drop_size=10, refiner_ratio=0.2, cycle=1, image=get_value_at_index(ksampler_adv_efficient_37, 5), detailer_pipe=get_value_at_index(basicpipetodetailerpipe_75:3, 0))

StoryMaster.json

Contribute

Hi pydn,
I love what you've done and I'd like to contribute.
Are you open to pull requests?

Problem with loading models and vae

Global Models Cache:
Ckpt:
[1] taureal (ids: None)
Vae:
[1] vae-ft-mse-840000-ema-pruned.vae (ids: None)
Traceback (most recent call last):
  File "E:\ComfyUI\workflow_api.py", line 166, in <module>
    main()
  File "E:\ComfyUI\workflow_api.py", line 137, in main
    ksampler_efficient_48 = ksampler_efficient.sample(
  File "E:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 693, in sample
    globals_cleanup(prompt)
  File "E:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 209, in globals_cleanup
    id_array = [id for id in tup[-1] if str(id) in prompt.keys()]
  File "E:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 209, in <listcomp>
    id_array = [id for id in tup[-1] if str(id) in prompt.keys()]
AttributeError: 'NoneType' object has no attribute 'keys'

OS: Windows

A problem with running the script

Hello, I discovered this simple script today and tested it. Some errors occurred during the process; the output is below. It is the latest version, freshly downloaded.
Regarding the file not being found, there seems to be a problem with this path.

FileNotFoundError: [WinError 2] The system cannot find the file specified: 'J:\ComfyUI\ComfyUI-to-Python-Extension\..\web\extensions\CUP-CLIPBOARD'

Cannot import J:\ComfyUI\custom_nodes\Cup-ClipBoard module for custom nodes: [WinError 2] The system cannot find the file specified: 'J:\ComfyUI\ComfyUI-to-Python-Extension\..\web\extensions\CUP-CLIPBOARD'
!! Trying to start the node
Searge-SDXL v4.1 in J:\ComfyUI\custom_nodes\SeargeSDXL
Total VRAM 24564 MB, total RAM 65349 MB
xformers version: 0.0.21
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
VAE dtype: torch.bfloat16
WAS Node Suite: BlenderNeko's Advanced CLIP Text Encode found, attempting to enable CLIPTextEncode support.
WAS Node Suite: CLIPTextEncode (BlenderNeko Advanced + NSP) node enabled under WAS Suite/Conditioning menu.
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: ffmpeg_bin_path is not set in J:\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 194 nodes successfully.

    "Success is not just about making money. It's about making a difference." - Unknown

Import times for custom nodes:
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Image-Selector
0.0 seconds: J:\ComfyUI\custom_nodes\sdxl_prompt_styler
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb
0.0 seconds: J:\ComfyUI\custom_nodes\stability-ComfyUI-nodes
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_Noise
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
0.0 seconds: J:\ComfyUI\custom_nodes\sdxl-recommended-res-calc
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-post-processing-nodes
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_Cutoff
0.0 seconds: J:\ComfyUI\custom_nodes\cup.py
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Bmad-DirtyUndoRedo
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
0.0 seconds: J:\ComfyUI\custom_nodes\masquerade-nodes-comfyui
0.0 seconds (IMPORT FAILED): J:\ComfyUI\custom_nodes\Cup-ClipBoard
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_NestedNodeBuilder
0.0 seconds (IMPORT FAILED): J:\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-OpenPose-Editor
0.0 seconds: J:\ComfyUI\custom_nodes\facedetailer
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-QualityOfLifeSuit_Omar92
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyMath
0.0 seconds: J:\ComfyUI\custom_nodes\comfy-plasma
0.0 seconds (IMPORT FAILED): J:\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
0.0 seconds: J:\ComfyUI\custom_nodes\comfyui-dream-project
0.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale
0.0 seconds: J:\ComfyUI\custom_nodes\comfyui_controlnet_aux
0.0 seconds: J:\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
0.2 seconds: J:\ComfyUI\custom_nodes\SeargeSDXL
0.3 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Manager
0.3 seconds: J:\ComfyUI\custom_nodes\comfy_controlnet_preprocessors
0.4 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Allor
0.7 seconds: J:\ComfyUI\custom_nodes\comfyui-dynamicprompts
0.7 seconds: J:\ComfyUI\custom_nodes\efficiency-nodes-comfyui
1.4 seconds: J:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
1.5 seconds: J:\ComfyUI\custom_nodes\was-node-suite-comfyui
3.0 seconds: J:\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet

Traceback (most recent call last):
  File "J:\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 460, in <module>
    ComfyUItoPython(input_file=input_file, output_file=output_file, queue_size=queue_size)
  File "J:\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 425, in __init__
    self.execute()
  File "J:\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 441, in execute
    load_order = load_order_determiner.determine_load_order()
  File "J:\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 120, in determine_load_order
    self._load_special_functions_first()
  File "J:\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 156, in _load_special_functions_first
    class_def = self.node_class_mappings[self.data[key]['class_type']]
KeyError: 'CheckpointLoader|pysssss'

(base) PS J:\ComfyUI\ComfyUI-to-Python-Extension>

"TypeError: 'int' object is not subscriptable" when running script

When I run:
!python comfyui_to_python.py

I get this error:
Traceback (most recent call last):
  File "/content/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 460, in <module>
    ComfyUItoPython(input_file=input_file, output_file=output_file, queue_size=queue_size)
  File "/content/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 425, in __init__
    self.execute()
  File "/content/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 441, in execute
    load_order = load_order_determiner.determine_load_order()
  File "/content/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 120, in determine_load_order
    self._load_special_functions_first()
  File "/content/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 156, in _load_special_functions_first
    class_def = self.node_class_mappings[self.data[key]['class_type']]
TypeError: 'int' object is not subscriptable

Any ideas?

how to clean cuda cache after inference

Thanks for the great work! My question is how to clear the CUDA cache after inference. Let's say I have the sample code just like in the readme:

import random
import torch
import sys

sys.path.append("../")
from nodes import (
    VAEDecode,
    KSamplerAdvanced,
    EmptyLatentImage,
    SaveImage,
    CheckpointLoaderSimple,
    CLIPTextEncode,
)


def main():
    with torch.inference_mode():
        checkpointloadersimple = CheckpointLoaderSimple()
        checkpointloadersimple_4 = checkpointloadersimple.load_checkpoint(
            ckpt_name="sd_xl_base_1.0.safetensors"
        )

        emptylatentimage = EmptyLatentImage()
        emptylatentimage_5 = emptylatentimage.generate(
            width=1024, height=1024, batch_size=1
        )

        cliptextencode = CLIPTextEncode()
        cliptextencode_6 = cliptextencode.encode(
            text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
            clip=checkpointloadersimple_4[1],
        )

        cliptextencode_7 = cliptextencode.encode(
            text="text, watermark", clip=checkpointloadersimple_4[1]
        )

        checkpointloadersimple_12 = checkpointloadersimple.load_checkpoint(
            ckpt_name="sd_xl_refiner_1.0.safetensors"
        )

        cliptextencode_15 = cliptextencode.encode(
            text="evening sunset scenery blue sky nature, glass bottle with a galaxy in it",
            clip=checkpointloadersimple_12[1],
        )

        cliptextencode_16 = cliptextencode.encode(
            text="text, watermark", clip=checkpointloadersimple_12[1]
        )

        ksampleradvanced = KSamplerAdvanced()
        vaedecode = VAEDecode()
        saveimage = SaveImage()

        for q in range(10):
            ksampleradvanced_10 = ksampleradvanced.sample(
                add_noise="enable",
                noise_seed=random.randint(1, 2**64),
                steps=25,
                cfg=8,
                sampler_name="euler",
                scheduler="normal",
                start_at_step=0,
                end_at_step=20,
                return_with_leftover_noise="enable",
                model=checkpointloadersimple_4[0],
                positive=cliptextencode_6[0],
                negative=cliptextencode_7[0],
                latent_image=emptylatentimage_5[0],
            )

            ksampleradvanced_11 = ksampleradvanced.sample(
                add_noise="disable",
                noise_seed=random.randint(1, 2**64),
                steps=25,
                cfg=8,
                sampler_name="euler",
                scheduler="normal",
                start_at_step=20,
                end_at_step=10000,
                return_with_leftover_noise="disable",
                model=checkpointloadersimple_12[0],
                positive=cliptextencode_15[0],
                negative=cliptextencode_16[0],
                latent_image=ksampleradvanced_10[0],
            )

            vaedecode_17 = vaedecode.decode(
                samples=ksampleradvanced_11[0], vae=checkpointloadersimple_12[2]
            )

            saveimage_19 = saveimage.save_images(
                filename_prefix="ComfyUI", images=vaedecode_17[0]
            )
            # clean cuda cache
            torch.cuda.empty_cache()




if __name__ == "__main__":
    main()

This torch.cuda.empty_cache() didn't work, because the cache keeps increasing. By the way, I want the model to stay in the cache.

Question RE use case

Could this be used to effectively write a custom node (more or less)? For instance, in my AegisFlow workflow, I have a large image saving section that really doesn't need exposed; I'd rather have the user just have in some inputs to fill in and have the rest of the image saving process happen as a single node. I previously used a custom node called "ComponentBuilder" that can do this, but unfortunately when plugins get updated it all breaks, and can't be easily untangled again so I abandoned that approach.

But if I could make my own node, maybe I could control the process a bit better.

ComfyUI-VideoHelperSuite load_video() not supported

When converting the ComfyUI-VideoHelperSuite "Load Video (Path)" node to Python, the result is:

vhs_loadvideopath = NODE_CLASS_MAPPINGS["VHS_LoadVideoPath"]()
vhs_loadvideopath_568 = vhs_loadvideopath.load_video()

Notice there are no args for the load_video function, which causes an exception.

Tips, tricks, or a fix?

result with noise

I just used the code in your repo (settings in the attached screenshot); however, the generated result is filled with noise. I wonder why.

ksampler with refiner (fooocus)

  File "C:\Users\Usuario\Documents\Git Clone\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 246, in generate_workflow
    final_code = self.assemble_python_code(import_statements, special_functions_code, code, queue_size, custom_nodes)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Usuario\Documents\Git Clone\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 328, in assemble_python_code
    final_code = black.format_str(final_code, mode=black.Mode())
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "src\black\__init__.py", line 1085, in format_str
  File "src\black\__init__.py", line 1095, in _format_str_once
  File "src\black\parsing.py", line 100, in lib2to3_parse
black.parsing.InvalidInput: Cannot parse: 112:11:               ksampler with refiner (fooocus) = NODE_CLASS_MAPPINGS["KSampler With Refiner (Fooocus)"]()

The extension: ComfyUI_Fooocus_KSampler


This is the workflow I'm using (rename it to .json; GitHub doesn't support uploading .json files):
workflow.txt

A fixed-seed value and specifier are disregarded

While converting a KSampler control that specifies a fixed seed value, this extension's code always creates:

seed=random.randint(1, 2**64)

... regardless of the provided seed value and the "fixed" specifier in the "control_after_generate" field.

This issue comes up while creating a series of images that use a "ConditioningAverage" control to morph between two text descriptions. If the original seed value is manually entered instead of random.randint(1, 2**64), the code works as expected.
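A plain-Python illustration of the workaround (no ComfyUI required; the seed value and helper name here are made up): freeze the seed once instead of calling random.randint per run, and repeated runs become reproducible.

```python
import random

FIXED_SEED = 123456789  # the value entered in the KSampler's seed field

def pick_seed(fixed: bool = True) -> int:
    # fixed=False reproduces what the generated code currently emits.
    return FIXED_SEED if fixed else random.randint(1, 2**64)

# Two runs seeded identically draw identical noise sequences.
rng_a = random.Random(pick_seed())
rng_b = random.Random(pick_seed())
run_a = [rng_a.random() for _ in range(3)]
run_b = [rng_b.random() for _ in range(3)]
print(run_a == run_b)  # True: a fixed seed makes runs reproducible
```

Applied to the generated script, this means replacing the emitted `noise_seed=random.randint(1, 2**64)` with the seed value from the workflow when "control_after_generate" is "fixed".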

Pass `prompt` argument to nodes that define it in their inputs

Hello! Thank you for the useful script.

I noticed that, when converting workflows containing efficiency nodes, the resulting script fails, unlike the original workflow in ComfyUI. The reason seems to be that these nodes take a prompt parameter as input in their API (ref), which in Comfy is a special type of parameter containing the entire JSON workflow as a Python dictionary. The efficiency nodes then use that information to perform optimizations and prune unused tensors.

When not passing prompt to these nodes, the default value None always causes a crash down the line. While it might be argued that the behavior is not a problem of this extension, it's also true that not passing prompt to nodes that accept it might lead to undesirable behavior. In this case, even when handling the lack of prompt gracefully, the efficiency nodes would probably not be able to carry out performance optimizations.

It would be great to update this extension to send prompt as input to the nodes that support it in the generated code.
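A rough sketch of what that could look like (a hypothetical helper, not the extension's actual code): load the API-format workflow JSON once and forward it as the prompt keyword to nodes whose inputs declare it. A fake node function stands in for an efficiency node here.

```python
import json
import os
import tempfile

def call_with_prompt(node_fn, workflow_path, **kwargs):
    """Load the API-format workflow and forward it as the `prompt` kwarg."""
    with open(workflow_path) as f:
        prompt = json.load(f)  # the whole workflow as a dict, keyed by node id
    return node_fn(prompt=prompt, **kwargs)

# Stand-in for a node whose API accepts `prompt` (e.g. the efficiency nodes).
def fake_node(prompt=None, seed=0):
    return sorted(prompt.keys()), seed

# Tiny demo with a one-node workflow file (no ComfyUI required):
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"4": {"class_type": "CheckpointLoaderSimple"}}, f)
    path = f.name

result = call_with_prompt(fake_node, path, seed=42)
os.unlink(path)
print(result)
```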

Add a button in omni panel

In the omni panel, add a button like the ComfyUI Manager button that generates and downloads the Python script. I'm not sure if this is possible, but since the environment is already loaded, it wouldn't have to look for CUDA, ROCm, or others.

Integrate directly in ComfyUI ?

Hi, I had a change of heart and really enjoy using the ComfyUI core directly as part of my app, but I think this would be much more useful as a "Save Workflow as Python" button instead of a separate script that you have to run. Cheers

Works in the GUI but fails in the script: something with image1 kwargs

I have a super simple workflow that takes 3 images, feeds them into IP-Adapter, generates an image, then upscales it.

Works fine on GUI
Here is the original workflow
test.json

It successfully compiled into a .py script, but when I run it with Python I get this:

model_type EPS
adm 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
clip unexpected: ['clip_l.transformer.text_model.embeddings.position_ids']
Requested to load SD1ClipModel
Loading 1 new model
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
INFO: Clip Vision model loaded from C:\ComfyUI\ComfyUI\models\clip_vision\CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
INFO: IPAdapter model loaded from C:\ComfyUI\ComfyUI\models\ipadapter\ip-adapter-faceid-plusv2_sd15.bin
INFO: LoRA model loaded from C:\ComfyUI\ComfyUI\models\loras\ip-adapter-faceid-plusv2_sd15_lora.safetensors
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\ComfyUI\ComfyUI\models\insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\ComfyUI\ComfyUI\models\insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\ComfyUI\ComfyUI\models\insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\ComfyUI\ComfyUI\models\insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\ComfyUI\ComfyUI\models\insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
INFO: InsightFace model loaded with CPU provider
Traceback (most recent call last):
  File "C:\ComfyUI\ComfyUI\ComfyUI-to-Python-Extension\workflow_api.py", line 291, in <module>
    main()
  File "C:\ComfyUI\ComfyUI\ComfyUI-to-Python-Extension\workflow_api.py", line 210, in main
    impactmakeimagebatch_32 = impactmakeimagebatch.doit()
  File "C:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\util_nodes.py", line 378, in doit
    image1 = kwargs['image1']
KeyError: 'image1'

C:\ComfyUI\ComfyUI\ComfyUI-to-Python-Extension>

Run with ComfyUI's python_embeded

When following your instructions, this extension's requirements get installed to the system Python's site-packages directory.
This doesn't work for me, as the torch installed there has no CUDA acceleration and the script fails.

e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension>python comfyui_to_python.py
Traceback (most recent call last):
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 17, in <module>
    from nodes import NODE_CLASS_MAPPINGS
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\..\nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\..\comfy\diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\..\comfy\sd.py", line 5, in <module>
    from comfy import model_management
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\..\comfy\model_management.py", line 108, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\..\comfy\model_management.py", line 76, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Python\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Python\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I think a better way would be installing/running it with ComfyUI's python_embeded.
I've tried installing the requirements there:

e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension>..\..\python_embeded\python.exe -s -m pip install -r requirements.txt

But then execution doesn't seem to find the utils module next to the script.

e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension>..\..\python_embeded\python.exe -s comfyui_to_python.py
Traceback (most recent call last):
  File "e:\aistuff\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 14, in <module>
    from utils import import_custom_nodes, find_path, add_comfyui_directory_to_sys_path, add_extra_model_paths, get_value_at_index
ModuleNotFoundError: No module named 'utils'
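The portable build's embedded Python reads a `._pth` file that pins `sys.path`, so the script's own directory (where `utils.py` lives) is never searched. A minimal workaround sketch, assuming `utils.py` sits next to `comfyui_to_python.py`, is to prepend that directory before the failing import:

```python
import os
import sys

# directory that holds comfyui_to_python.py and utils.py; an embedded Python
# (python_embeded with a ._pth file) does not add it to sys.path automatically
script_dir = os.path.dirname(os.path.abspath(__file__))
if script_dir not in sys.path:
    sys.path.insert(0, script_dir)
```

With this at the top of the script, `from utils import ...` should resolve regardless of which interpreter launches it.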

Fails to generate code for a ControlNet SDXL workflow

Total VRAM 45591 MB, total RAM 64139 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA RTX 6000 Ada Generation : native
Using xformers cross attention
Traceback (most recent call last):
File "/workspace/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 439, in
ComfyUItoPython(input_file=input_file, output_file=output_file, queue_size=queue_size)
File "/workspace/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 404, in init
self.execute()
File "/workspace/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 420, in execute
load_order = load_order_determiner.determine_load_order()
File "/workspace/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 123, in determine_load_order
self._load_special_functions_first()
File "/workspace/ComfyUI/ComfyUI-to-Python-Extension/comfyui_to_python.py", line 159, in _load_special_functions_first
class_def = self.node_class_mappingsself.data[key]['class_type']
TypeError: 'int' object is not subscriptable

I got these workflows from

https://huggingface.co/stabilityai/control-lora/tree/main/comfy-control-LoRA-workflows
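A `TypeError: 'int' object is not subscriptable` at that point usually means the script was given the UI export (`workflow.json`, which contains top-level integers such as `last_node_id`) instead of the API export (`workflow_api.json`, where every value is a node dict). A hedged sketch of a guard that keeps only API-format node entries (the function name is mine):

```python
def api_nodes_only(prompt: dict) -> dict:
    # API-format entries are dicts carrying a "class_type" key;
    # UI-format files also contain ints/lists such as "last_node_id"
    return {
        node_id: node
        for node_id, node in prompt.items()
        if isinstance(node, dict) and "class_type" in node
    }
```

Running the converter on the API export (Save (API Format) in ComfyUI's menu) avoids the problem entirely.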

Problem on custom node VideoHelperSuite

I'm trying to convert my workflow to code, but when execution reaches the Video Combine section, this error occurs:
(error shown in an attached WeChat screenshot)
My workflow works fine in comfyui but fails in python.

no module named utils

When running the script, I always get this issue. I'm not sure which utils module the code is trying to load.

Traceback (most recent call last):
  File "C:...\ComfyUI-to-Python-Extension-main\comfyui_to_python.py", line 14, in <module>
    from utils import import_custom_nodes, find_path, add_comfyui_directory_to_sys_path, add_extra_model_paths, get_value_at_index
ModuleNotFoundError: No module named 'utils'

Plugin nodes not working?

Any idea if this is possible? I'd love to make a script utilizing ControlNet in the workflow; however, running the script generator produces an error:

Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 333, in <module>
    main(input, queue_size)
  File "D:\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 324, in main
    load_order = determine_load_order(prompt)
  File "D:\ComfyUI_windows_portable\ComfyUI\ComfyUI-to-Python-Extension\comfyui_to_python.py", line 112, in determine_load_order
    class_def = NODE_CLASS_MAPPINGS[data[key]['class_type']]()
KeyError: 'OpenposePreprocessor'
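The `KeyError: 'OpenposePreprocessor'` means the custom node pack providing that node was never imported, so it did not register itself in `NODE_CLASS_MAPPINGS` before the lookup. A hedged lookup helper (the name is mine) that turns the bare `KeyError` into an actionable message:

```python
def lookup_node_class(mappings: dict, class_type: str):
    # custom nodes register themselves in NODE_CLASS_MAPPINGS only after
    # their package has been imported (ComfyUI does this at startup)
    try:
        return mappings[class_type]
    except KeyError:
        raise KeyError(
            f"'{class_type}' is not registered; import the custom node pack "
            "that provides it before converting the workflow"
        ) from None
```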

Missing metadata

Hi very cool project.
I've just tried it today, but when I use the generated Python scripts, I always lose all the metadata (ComfyUI / SD).
Is there a simple way to keep it in the final image?
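ComfyUI's own SaveImage embeds the workflow JSON in PNG text chunks; a standalone script can do the same with Pillow. A sketch under the assumption that the `"prompt"` text key is what ComfyUI reads back (the helper name is mine):

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_comfy_metadata(image: Image.Image, path: str, prompt: dict) -> None:
    # embed the workflow JSON in a PNG tEXt chunk so tools that read
    # ComfyUI metadata can recover it from the saved file
    info = PngInfo()
    info.add_text("prompt", json.dumps(prompt))
    image.save(path, pnginfo=info)
```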

error with wildcard node

Traceback (most recent call last):
  File "C:\Users\Administrator\Downloads\ComfyUI\ComfyUI\workflow_api_yay.py", line 186, in <module>
    main()
  File "C:\Users\Administrator\Downloads\ComfyUI\ComfyUI\workflow_api_yay.py", line 136, in main
    impactwildcardencode_74 = impactwildcardencode.doit()
  File "C:\Users\Administrator\Downloads\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 2197, in doit
    populated = kwargs['populated_text']
KeyError: 'populated_text'
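The generated `doit()` call passes no keyword arguments, while the Impact Pack node reads `kwargs['populated_text']` directly. Until the converter emits that node's inputs, a defensive rewrite of the failing line could look like this sketch (the helper name and the `wildcard_text` fallback key are my assumptions):

```python
def read_populated_text(**kwargs) -> str:
    # prefer the populated text; fall back to the raw wildcard text
    # if the converter dropped it, rather than raising KeyError
    return kwargs.get("populated_text") or kwargs.get("wildcard_text", "")
```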

The script generates incorrect code if the name of the custom node contains problematic characters

Hi,

I used the program to generate the python code for the following workflow:

Now unfortunately, the name of the custom node is: "WD14Tagger|pysssss"

This generates the following code:

wd14tagger | pysssss_2 = wd14tagger | pysssss.tag(
    model="wd-v1-4-moat-tagger-v2",
    threshold=0.35,
    character_threshold=0.85,
    exclude_tags="",
    image=get_value_at_index(loadimage_1, 0),
)

The above code doesn't run because `|` is treated as a bitwise OR operator.

 wd14tagger | pysssss = NODE_CLASS_MAPPINGS["WD14Tagger|pysssss"]()
    ^^^^^^^^^^^^^^^^^^^^
SyntaxError: cannot assign to expression here. Maybe you meant '==' instead of '='?
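A fix is to sanitize the mapping key into a valid Python identifier before using it as a variable name, while keeping the original string for the `NODE_CLASS_MAPPINGS` lookup. A minimal sketch (the function name is mine):

```python
import re


def clean_variable_name(name: str) -> str:
    # replace any character that is not valid in a Python identifier
    # (e.g. "|", ".", spaces) with an underscore
    cleaned = re.sub(r"\W", "_", name.lower())
    # identifiers cannot start with a digit
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned
```

So `NODE_CLASS_MAPPINGS["WD14Tagger|pysssss"]` would be assigned to a variable like `wd14tagger_pysssss` instead of the unparseable `wd14tagger | pysssss`.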

Problem with Efficient Loader because there are too many properties(?)

First of all, congratulations on this amazing piece of software. I was able to make most of it work, but there is a problem with a particular custom node. This is the portion I believe is failing (I am attaching the workflow to this post), along with the code generated:

... removed for brevity ...
    "class_type": "Eff. Loader SDXL"
  },
  "2": {
    "inputs": {
      "input_mode": "simple",
      "lora_count": 2,
      "lora_name_1": "add-detail-xl.safetensors",
      "lora_wt_1": 0.7000000000000001,
      "model_str_1": 1,
      "clip_str_1": 1,
      "lora_name_2": "xl_more_art-full_v1.safetensors",
      "lora_wt_2": 0.7000000000000001,
      "model_str_2": 1,
      "clip_str_2": 1,
      "lora_name_3": "None",
      "lora_wt_3": 1,
      "model_str_3": 1,
      "clip_str_3": 1,
      "lora_name_4": "None",
      "lora_wt_4": 1,
      "model_str_4": 1,
      "clip_str_4": 1,
      "lora_name_5": "None",
      "lora_wt_5": 1,
      "model_str_5": 1,
... removed for brevity ...

Generates:
        lora_stacker = NODE_CLASS_MAPPINGS["LoRA Stacker"]()
        lora_stacker_2 = lora_stacker.lora_stacker(input_mode="simple", lora_count=2)

        eff_loader_sdxl = NODE_CLASS_MAPPINGS["Eff. Loader SDXL"]()
        ksampler_sdxl_eff = NODE_CLASS_MAPPINGS["KSampler SDXL (Eff.)"]()
        saveimage = SaveImage()

        for q in range(10):
            eff_loader_sdxl_1 = eff_loader_sdxl.efficientloaderSDXL(
                base_ckpt_name="copaxTimelessxlSDXL1_v5.safetensors",
                base_clip_skip=-1,
                refiner_ckpt_name="sdXL_v10RefinerVAEFix.safetensors",
                refiner_clip_skip=-1,
                positive_ascore=6,
                negative_ascore=2,
                vae_name="Baked VAE",
                positive="photorealistic scene of (..removed for brevity..) freckles, realistic, wide angle,  wide shot, 4k, film grain, depth, masterpiece",
                negative="embedding:negativeXL (worst quality:1.5), (low quality:1.5), (normal quality:1.5), lowres, bad anatomy, bad hands, multiple eyebrow, (cropped), extra limb, missing limbs, deformed hands, long neck, long body, (bad hands), signature, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, painting by bad-artist, asian, long hands, helmet:1.5, cgi, 3d, illustration, blue skin, unnatural skin, close up shot, out of frame ",
                empty_latent_width=1024,
                empty_latent_height=1024,
                batch_size=1,
                lora_stack=get_value_at_index(lora_stacker_2, 0),
            )

workflow_api.json.txt
workflow_api.py.txt

Notice that the LoRA node has a huge number of properties, but only the first two are added to the generated call.
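One way to avoid emitting only a fixed subset of arguments is to pass through every literal input from the workflow JSON and special-case only link references. A sketch under the assumption that links are encoded as `[node_id, slot]` two-element lists, as in the API export (the function name is mine):

```python
def literal_inputs(node_inputs: dict) -> dict:
    # keep every literal input (str/int/float/bool); two-element lists
    # are links to other nodes and must be resolved separately
    return {
        name: value
        for name, value in node_inputs.items()
        if not (isinstance(value, list) and len(value) == 2)
    }
```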

Feature Request: Export as node

This would be a killer feature: if you could specify input and output nodes, the workflow could be exported as a Python script that is itself a ComfyUI node.
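For context, a ComfyUI custom node is just a class exposing `INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, and `CATEGORY`; an exported workflow would need to be wrapped in that shape. A minimal skeleton (the class name, category, and placeholder body are mine):

```python
class ExportedWorkflowNode:
    """Skeleton of a ComfyUI custom node wrapping an exported workflow."""

    @classmethod
    def INPUT_TYPES(cls):
        # the chosen "input nodes" of the workflow would surface here
        return {"required": {"prompt_text": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"
    CATEGORY = "exported"

    def run(self, prompt_text):
        # the generated main() body would execute here and
        # return the values of the chosen "output nodes"
        return (prompt_text,)
```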
