
fooocus_nodes's Issues

fooocus style is not displayed

Dear creator, thank you very much for integrating Fooocus into ComfyUI. A friend and I wanted to test the plugin, but we both run into the same problem: the style selector does not display any styles, and running the workflow reports an error saying that a style must be selected. We noticed that easy-use has the same style selector. Everyone who installs this node seems to hit the same problem, so I am reporting it here. I hope there is a way to solve it. Thanks for your hard work, thank you very much!

Error occurred when executing Fooocus PreKSampler: 'ModelPatcher' object has no attribute 'current_loaded_device'

Hello, first of all, I wanted to thank you for the excellent work you did; it has been very helpful to me. However, since yesterday, the nodes haven't been working for me. I even reinstalled ComfyUI, but it didn't help.

Error occurred when executing Fooocus PreKSampler:

'ModelPatcher' object has no attribute 'current_loaded_device'

File "C:\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\fooocusNodes.py", line 423, in fooocus_preKSampler
pipeline.refresh_everything(
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\default_pipeline.py", line 234, in refresh_everything
refresh_base_model(base_model_name)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\default_pipeline.py", line 70, in refresh_base_model
model_base = core.load_model(filename)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\core.py", line 147, in load_model
unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\sd.py", line 441, in load_checkpoint_guess_config
model = load_checkpoint_guess_config_without_cache(ckpt_path, output_vae, output_clip, output_clipvision, embedding_directory, output_model)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\sd.py", line 501, in load_checkpoint_guess_config_without_cache
model_management.load_model_gpu(model_patcher)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\comfy\model_management.py", line 474, in load_model_gpu
return load_models_gpu([model])
File "C:\StabilityMatrix\Data\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 447, in patched_load_models_gpu
y = comfy.model_management.load_models_gpu_origin(*args, **kwargs)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\comfy\model_management.py", line 439, in load_models_gpu
total_memory_required[loaded_model.device] = total_memory_required.get(loaded_model.device, 0) + loaded_model.model_memory_required(loaded_model.device)
File "C:\StabilityMatrix\Data\Packages\ComfyUI\comfy\model_management.py", line 277, in model_memory_required
if device == self.model.current_loaded_device():
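
The traceback stops at ComfyUI's load_models_gpu asking the ModelPatcher for current_loaded_device(), a method the node pack's bundled ldm_patched ModelPatcher does not have, which points at a version mismatch between the two. Updating Fooocus_Nodes to match the installed ComfyUI is the usual resolution; purely as an illustration, a shim along the lines below would paper over the missing method. The import path and the attribute names are assumptions based on older ComfyUI ModelPatcher code, not the node pack's actual files.

```python
# Hypothetical compatibility shim, not the maintainer's fix.
from ldm_patched.modules.model_patcher import ModelPatcher  # assumed location

if not hasattr(ModelPatcher, "current_loaded_device"):
    def current_loaded_device(self):
        # Report whichever device the patcher believes the model is on;
        # fall back to the offload device if it does not track one.
        return getattr(self, "current_device", self.offload_device)

    ModelPatcher.current_loaded_device = current_loaded_device
```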

First run is successful, second run crashes

Here are the terminal logs:

[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (128) greater than context_length 16.
[AnimateDiffEvo] - INFO - Using motion module mm-Stabilized_high.pth:v1.
17%|█▋ | 3/18 [03:05<15:27, 61.84s/it]got prompt
172.17.0.1 - - [28/May/2024 03:30:47] "GET /assets/index-BcHurkhF.css HTTP/1.1" 304 -
172.17.0.1 - - [28/May/2024 03:30:48] "GET /assets/index-B7I_54js.js HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:30:49] "GET /api/settings HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:31:41] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:31:42] "GET /assets/index-BcHurkhF.css HTTP/1.1" 304 -
172.17.0.1 - - [28/May/2024 03:31:42] "GET /assets/index-B7I_54js.js HTTP/1.1" 304 -
172.17.0.1 - - [28/May/2024 03:31:42] "GET /api/settings HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:31:42] "GET /api/projects HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:31:53] "GET /api/projects HTTP/1.1" 200 -
39%|███▉ | 7/18 [07:15<11:26, 62.39s/it]/app/server/projects/stop-wars/comfyui/web/comfyui_index.html
FETCH DATA from: /app/server/projects/stop-wars/comfyui/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
FETCH DATA from: /app/server/projects/stop-wars/comfyui/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
[ERROR] An error occurred while retrieving information for the 'Fooocus ultralyticsDetectorPipe' node.
Traceback (most recent call last):
File "/app/server/projects/stop-wars/comfyui/server.py", line 415, in get_object_info
out[x] = node_info(x)
File "/app/server/projects/stop-wars/comfyui/server.py", line 393, in node_info
info['input'] = obj_class.INPUT_TYPES()
File "/app/server/projects/stop-wars/comfyui/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 1368, in INPUT_TYPES
bboxs = ["bbox/" + x for x in folder_paths.get_filename_list("ultralytics_bbox")]
File "/app/server/projects/stop-wars/comfyui/folder_paths.py", line 225, in get_filename_list
out = get_filename_list_(folder_name)
File "/app/server/projects/stop-wars/comfyui/folder_paths.py", line 192, in get_filename_list_
folders = folder_names_and_paths[folder_name]
KeyError: 'ultralytics_bbox'

[ERROR] An error occurred while retrieving information for the 'Fooocus samLoaderPipe' node.
Traceback (most recent call last):
File "/app/server/projects/stop-wars/comfyui/server.py", line 415, in get_object_info
out[x] = node_info(x)
File "/app/server/projects/stop-wars/comfyui/server.py", line 393, in node_info
info['input'] = obj_class.INPUT_TYPES()
File "/app/server/projects/stop-wars/comfyui/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 1398, in INPUT_TYPES
"model_name": (folder_paths.get_filename_list("sams"),),
File "/app/server/projects/stop-wars/comfyui/folder_paths.py", line 225, in get_filename_list
out = get_filename_list_(folder_name)
File "/app/server/projects/stop-wars/comfyui/folder_paths.py", line 192, in get_filename_list_
folders = folder_names_and_paths[folder_name]
KeyError: 'sams'

172.17.0.1 - - [28/May/2024 03:32:10] "GET /api/settings HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:32:10] "GET /api/projects HTTP/1.1" 200 -
56%|█████▌ | 10/18 [10:23<08:19, 62.42s/it]got prompt
172.17.0.1 - - [28/May/2024 03:38:18] "GET /api/settings HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:38:18] "GET /api/projects HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:38:23] "POST /api/projects/stop-wars/stop HTTP/1.1" 200 -
172.17.0.1 - - [28/May/2024 03:38:23] "GET /api/projects HTTP/1.1" 200 -
72%|███████▏ | 13/18 [13:30<05:12, 62.49s/it]172.17.0.1 - - [28/May/2024 03:38:33] "GET /api/projects HTTP/1.1" 200 -

Command line arguments like in Fooocus

Hello.
I very much miss command line arguments like in Fooocus. I worked around it by adding lines to the "./custom_nodes/Fooocus_Nodes/py/ldm_patched/modules/model_management.py" file:
(screenshot of the added lines attached)
But my change disappears every time the nodes update, which is very inconvenient. Perhaps there is another way? If not, I suggest adding a JSON config file with startup settings.
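
For reference, the suggested JSON config could be as small as the sketch below. The file name and the keys are made up for illustration; this is not an existing feature of the node pack.

```python
# Minimal sketch of a startup-config file that survives node updates.
import json
import os

# Assumed name; lives next to the node pack so git updates do not overwrite user edits.
CONFIG_PATH = os.path.join(os.path.dirname(__file__), "fooocus_nodes_config.json")

def load_startup_config() -> dict:
    """Return the user's startup overrides, or an empty dict if no config exists."""
    try:
        with open(CONFIG_PATH, "r", encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

# model_management.py could then consult the dict instead of hand-edited lines, e.g.:
# cfg = load_startup_config()
# if cfg.get("always_gpu"):
#     ...
```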

Refiner cannot start normally

Error occurred when executing Fooocus PreKSampler:

No such file or directory: "/sdXL_v10RefinerVAEFix.safetensors"

After selecting the refiner, I always get this error. I am sure that I have put the refiner in the checkpoints folder, and Fooocus Loader can read it, but I still get the error. What should I do?
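
The leading slash in "/sdXL_v10RefinerVAEFix.safetensors" suggests the refiner name was joined onto an empty base path rather than resolved through ComfyUI's folder registry; that is an assumption, not a confirmed diagnosis. A quick check like the following, run inside the ComfyUI environment, shows whether the file actually resolves and which checkpoint folders are scanned.

```python
# Diagnostic sketch using standard ComfyUI folder_paths calls.
import folder_paths

refiner_name = "sdXL_v10RefinerVAEFix.safetensors"
full_path = folder_paths.get_full_path("checkpoints", refiner_name)

if full_path is None:
    # List the directories ComfyUI scans for checkpoints so a misplaced file is easy to spot.
    print("Refiner not found. Checkpoint folders:", folder_paths.get_folder_paths("checkpoints"))
else:
    print("Refiner resolves to:", full_path)
```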

Error - Upscaling triggers during Inpainting with a torch.load error (mask size is also not being detected correctly)

Strange one. I am using your Inpainting Workflow and trying to incorporate other nodes which send the image and mask separately into the Fooocus Inpaint node. It works under some conditions but then a very slight change to the mask will trigger the following error:

!!! Exception during processing!!! 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

I will put the entire log below, but both the successful and failed runs reach the steps:
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
At this point, the successful attempt goes on to VAE Inpaint encoding. The failed one instead attempts "Upscaling image with shape (615, 615, 3)".

However, in both cases the image is 1024x1024, the mask is 1024x1024, and the Fooocus Loader is set to 1024x1024. I made a few tests and discovered that if I take a failed mask and add something at the edge, like a gray line, then the mask processes correctly. Even when the process works correctly, it often reports the latent size incorrectly - 1024 x 960 in the example below.

I think there may be two issues here. The main error is probably caused by something in my ComfyUI install not allowing the upscale to work. Hopefully you have a suggestion for me to try here.

However, I think there may also be a bug that causes the upscaling to start in the first place, since I am sending a 1024x1024 mask. Alternatively, your program may be trying to crop around the mask and upscale for higher resolution. If so, it is still odd that it does not detect the sizes correctly.

I can upload an image along with working and non-working masks if you wish. Let me know what to try for the error I am getting.

Thank you.

WORKING LOG:
got prompt
[rgthree] Using rgthree's optimized recursive execution.
[Parameters] Adaptive CFG = 7.0
[Parameters] Sharpness = 2.0
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 827575702105265

PS: PS not connected

[Fooocus] Downloading upscale models ...
Downloading inpainter ...
[Inpaint] Current inpaint model is C:\A1111\StabilityMatrix\Packages\ComfyUI\models\inpaint\inpaint_v26.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 24
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Synthetic Refiner Activated
Synthetic Refiner Activated
Request to load LoRAs [] for model [C:\A1111\StabilityMatrix\Models\StableDiffusion\SDXL\juggernautXL_v8Rundiffusion.safetensors].
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
[Fooocus] VAE Inpaint encoding ...
[Fooocus] VAE encoding ..
Final resolution is (1024, 1024), latent is (1024, 960).
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 960)
Preparation time: 2.88 seconds
[Fooocus] Moving model to GPU ...
Current Task 1 ……
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model

FAILED LOG:
[rgthree] Using rgthree's optimized recursive execution.

PS: PS not connected

[Parameters] Adaptive CFG = 7.0
[Parameters] Sharpness = 2.0
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 803864263111364
[Fooocus] Downloading upscale models ...
Downloading inpainter ...
[Inpaint] Current inpaint model is C:\A1111\StabilityMatrix\Packages\ComfyUI\models\inpaint\inpaint_v26.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 24
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Synthetic Refiner Activated
Synthetic Refiner Activated
Request to load LoRAs [] for model [C:\A1111\StabilityMatrix\Models\StableDiffusion\SDXL\juggernautXL_v8Rundiffusion.safetensors].
Requested to load GPT2LMHeadModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.20 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Image processing ...
Upscaling image with shape (615, 615, 3) ...
!!! Exception during processing!!! 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

Traceback (most recent call last):
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 531, in _check_seekable
f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 465, in loader
result = original_loader(*args, **kwargs)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 986, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 440, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 425, in init
_check_seekable(buffer)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 534, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\serialization.py", line 527, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\fooocusNodes.py", line 556, in fooocus_preKSampler
inpaint_worker.current_task = inpaint_worker.InpaintWorker(
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\inpaint_worker.py", line 162, in init
self.interested_image = perform_upscale(self.interested_image)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\upscaler.py", line 21, in perform_upscale
sd = torch.load(model_filename)
File "C:\A1111\StabilityMatrix\Packages\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 481, in loader
raise ValueError(exp)
ValueError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
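
The failing call is `sd = torch.load(model_filename)` in py/modules/upscaler.py, and the "seekable" message is what torch.load raises when it is effectively handed None instead of a file path. Assuming the upscale model path was never resolved or downloaded (an assumption, not a confirmed cause), a guard along these lines would turn the cryptic error into an actionable one; this is a sketch, not the node pack's code.

```python
# Minimal guard sketch around the torch.load call the traceback points at.
import os
import torch

def load_upscale_state_dict(model_filename):
    if not model_filename or not os.path.isfile(model_filename):
        raise FileNotFoundError(
            f"Upscale model not found at {model_filename!r}; "
            "check that the upscale model download completed."
        )
    # torch.load needs a real, seekable file here, which is what the original error is about.
    return torch.load(model_filename, map_location="cpu")
```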

Model cache problem

Hi,
I am using the inpainting feature of Fooocus_Nodes. I see in the log that each time I run inpainting it loads or moves the model again, which costs about 3 to 4 seconds.

Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 4.02 seconds

I have set --gpu-only with ComfyUI and use_model_cache = True with Fooocus_Nodes.

Latest update throws errors and does not run

Hi @Seedsa ,

Since I installed the latest update today, I get this error:
[Fooocus] Initializing ...
[Fooocus] Loading models ...
!!! Exception during processing!!! E:\00_ComfyUI_Fooocus\ComfyUI\models\fooocus_expansion does not appear to have a file named config.json. Checkout 'https://huggingface.co/E:\00_ComfyUI_Fooocus\ComfyUI\models\fooocus_expansion/tree/None' for available files.
Traceback (most recent call last):
File "E:\00_ComfyUI_Fooocus\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\00_ComfyUI_Fooocus\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\00_ComfyUI_Fooocus\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\00_ComfyUI_Fooocus\ComfyUI\custom_nodes\Fooocus_Nodes\py\fooocusNodes.py", line 423, in fooocus_preKSampler
pipeline.refresh_everything(
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\00_ComfyUI_Fooocus\ComfyUI\custom_nodes\Fooocus_Nodes\py\modules\default_pipeline.py", line 247, in refresh_everything
final_expansion = FooocusExpansion()
File "E:\00_ComfyUI_Fooocus\ComfyUI\custom_nodes\Fooocus_Nodes\py\extras\expansion.py", line 39, in init
self.tokenizer = AutoTokenizer.from_pretrained(path_fooocus_expansion)
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 837, in from_pretrained
config = AutoConfig.from_pretrained(
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\transformers\models\auto\configuration_auto.py", line 934, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\transformers\configuration_utils.py", line 632, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\transformers\configuration_utils.py", line 689, in _get_config_dict
resolved_config_file = cached_file(
File "E:\00_ComfyUI_Fooocus\python_embeded\lib\site-packages\transformers\utils\hub.py", line 370, in cached_file
raise EnvironmentError(
OSError: E:\00_ComfyUI_Fooocus\ComfyUI\models\fooocus_expansion does not appear to have a file named config.json. Checkout 'https://huggingface.co/E:\00_ComfyUI_Fooocus\ComfyUI\models\fooocus_expansion/tree/None' for available files.

Prompt executed in 4.77 seconds
The file is present in the folder:
(screenshot of the folder contents attached)

What can I do to fix this?
Thanks,

In the meantime, I'll roll back.

Matt
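
A quick way to narrow this down is to check what Python actually sees at the expansion path the error names, to rule out the loader resolving a different or stale directory after the update. The directory below is taken from the log; the rest is only a diagnostic sketch.

```python
# Diagnostic sketch: confirm the expansion directory and config.json are visible to Python.
import os

path_fooocus_expansion = r"E:\00_ComfyUI_Fooocus\ComfyUI\models\fooocus_expansion"

print("directory exists:", os.path.isdir(path_fooocus_expansion))
print("contents:", os.listdir(path_fooocus_expansion) if os.path.isdir(path_fooocus_expansion) else [])
print("has config.json:", os.path.isfile(os.path.join(path_fooocus_expansion, "config.json")))
```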

Index out of bounds when combining ImagePrompt + CPDS

Getting an error when merging CPDS and ImagePrompt. Maybe I'm doing something wrong with the nodes? I've attached the workflow below.

Error occurred when executing Fooocus KSampler:

list index out of range

File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data

return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 763, in ksampler
results = previewimage.save_images(
File "/workspace/ComfyUI/nodes.py", line 1404, in save_images
full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, self.output_dir, images[0].shape[1], images[0].shape[0])

imagePrompt_CPDS (1).json

Add OpenPose v2 SDXL and depth map SDXL to ControlNet for the Fooocus node (not an issue, a missing feature)

Hi,

Could you add OpenPose v2 and depth ControlNet models for SDXL to your Fooocus ControlNet node?

Your nodes are really good; I use them all the time and have found ways to combine them with standard nodes and workflows, but this feature is missing both from the original Fooocus and from your nodes.

It would help greatly with poses and characters without carrying parts of the reference image into the final generation, for example cloth appearing on a metal part just because the reference image shows cloth when all I want is the pose.

Thank you for your work on these nodes; I hope this request can make it into your Fooocus ControlNet.

Have a good day,

Matt

node types were not found

When loading the graph, the following node types were not found:
Fooocus Loader 🔗
Nodes that have failed to load will show as red on the graph.

Fails to load at startup: No module named 'modules.sdxl_styles'

Error while loading:

Traceback (most recent call last):
File "c:\comfyui\ComfyUI\nodes.py", line 1879, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "C:\comfyui\ComfyUI\custom_nodes\Fooocus_Nodes_init
.py", line 23, in
imported_module = importlib.import_module(
File "importlib_init_.py", line 126, in import_module
File "", line 1050, in _gcd_import
File "", line 1027, in _find_and_load
File "", line 1006, in _find_and_load_unlocked
File "", line 688, in _load_unlocked
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "C:\comfyui\ComfyUI\custom_nodes\Fooocus_Nodes\py\api.py", line 8, in
from modules.sdxl_styles import legal_style_names
ModuleNotFoundError: No module named 'modules.sdxl_styles'

Cannot import C:\comfyui\ComfyUI\custom_nodes\Fooocus_Nodes module for custom nodes: No module named 'modules.sdxl_styles'

What could be the issue? Where can I get that module?
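
`modules` is a generic package name, so another package called `modules` earlier on sys.path (another custom node, or a stray Fooocus/WebUI checkout) can shadow the node pack's own py/modules. The sketch below only illustrates how to see which `modules` wins and how to push the node pack's directory to the front; the path is the one from this report and the shadowing cause is an assumption.

```python
# Diagnostic / workaround sketch for a shadowed "modules" package.
import importlib.util
import sys

spec = importlib.util.find_spec("modules")
print("'modules' currently resolves to:", spec.origin if spec else None)

fooocus_py = r"C:\comfyui\ComfyUI\custom_nodes\Fooocus_Nodes\py"
if fooocus_py not in sys.path:
    sys.path.insert(0, fooocus_py)  # let the node pack's own modules/ win the lookup
```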

Error occurred when executing Fooocus Upscale

Error occurred when executing Fooocus Upscale:

Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

File "C:\AI\ComfyUI-aki-v1.1\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI\ComfyUI-aki-v1.1\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI\ComfyUI-aki-v1.1\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\fooocusNodes.py", line 879, in FooocusUpscale
imgs = pipeline.process_diffusion(
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\default_pipeline.py", line 362, in process_diffusion
sampled_latent = core.ksampler(
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\core.py", line 309, in ksampler
samples = ldm_patched.modules.sample.sample(model,
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\samplers.py", line 712, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\sample_hijack.py", line 157, in sample_hacked
samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\samplers.py", line 557, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\k_diffusion\sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 321, in patched_KSamplerX0Inpaint_forward
out = self.inner_model(x, sigma,
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in call_impl
return forward_call(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\samplers.py", line 271, in forward
return self.apply_model(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\samplers.py", line 268, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 237, in patched_sampling_function
positive_x0, negative_x0 = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\samplers.py", line 222, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\model_base.py", line 85, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\modules\patch.py", line 395, in patched_unet_forward
emb = self.time_embed(t_emb)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
input = module(input)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\custom_nodes\Fooocus_Nodes\py\ldm_patched\modules\ops.py", line 27, in forward
return super().forward(*args, **kwargs)
File "C:\AI\ComfyUI-aki-v1.1\python\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
return F.linear(input, self.weight, self.bias)

help!
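
The failure is inside the UNet's time_embed Linear layer, which means the timestep embedding and the layer weights sit on different devices (CPU vs cuda:0), most likely because part of the model stayed offloaded. As a generic illustration only, not the node's actual code, a fix has the shape of a defensive cast applied wherever the embedding tensor is built:

```python
# Generic helper: make a tensor follow the device of the module it is fed into.
import torch
import torch.nn as nn

def to_module_device(t: torch.Tensor, module: nn.Module) -> torch.Tensor:
    """Move a tensor to the device of the module's parameters if they differ."""
    target = next(module.parameters()).device
    return t if t.device == target else t.to(target)
```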

Error occurred when executing Fooocus KSampler: stack expects a non-empty TensorList

It runs successfully the first time, but the error occurs on the second and subsequent runs.
Here is the error:
Error occurred when executing Fooocus KSampler:

stack expects a non-empty TensorList

File "/home/studio-lab-user/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/studio-lab-user/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/studio-lab-user/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/studio-lab-user/ComfyUI/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 764, in ksampler
base_image = torch.stack([tensor.squeeze() for tensor in all_imgs])


How to resolve it?
Thank you.
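
The traceback ends at `torch.stack([tensor.squeeze() for tensor in all_imgs])`, and torch.stack raises exactly this message when the list is empty; that all_imgs comes back empty on the second run is an assumption inferred from the message, not a confirmed cause. A guard like the sketch below would at least replace the opaque error with a readable one.

```python
# Minimal guard sketch around the line the traceback points at (fooocusNodes.py ksampler).
import torch

def stack_results(all_imgs):
    if not all_imgs:
        raise RuntimeError(
            "Fooocus KSampler produced no images on this run; "
            "re-check the inputs instead of letting torch.stack fail."
        )
    return torch.stack([tensor.squeeze() for tensor in all_imgs])
```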

something's wrong with node paths

[ERROR] An error occurred while retrieving information for the 'Fooocus ultralyticsDetectorPipe' node.
Traceback (most recent call last):
  File "/Users/alex/ComfyUI/server.py", line 434, in get_object_info
    out[x] = node_info(x)
  File "/Users/alex/ComfyUI/server.py", line 412, in node_info
    info['input'] = obj_class.INPUT_TYPES()
  File "/Users/alex/ComfyUI/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 1376, in INPUT_TYPES
    bboxs = ["bbox/" + x for x in folder_paths.get_filename_list("ultralytics_bbox")]
  File "/Users/alex/ComfyUI/folder_paths.py", line 228, in get_filename_list
    out = get_filename_list_(folder_name)
  File "/Users/alex/ComfyUI/folder_paths.py", line 195, in get_filename_list_
    folders = folder_names_and_paths[folder_name]
KeyError: 'ultralytics_bbox'

[ERROR] An error occurred while retrieving information for the 'Fooocus samLoaderPipe' node.
Traceback (most recent call last):
  File "/Users/alex/ComfyUI/server.py", line 434, in get_object_info
    out[x] = node_info(x)
  File "/Users/alex/ComfyUI/server.py", line 412, in node_info
    info['input'] = obj_class.INPUT_TYPES()
  File "/Users/alex/ComfyUI/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 1406, in INPUT_TYPES
    "model_name": (folder_paths.get_filename_list("sams"),),
  File "/Users/alex/ComfyUI/folder_paths.py", line 228, in get_filename_list
    out = get_filename_list_(folder_name)
  File "/Users/alex/ComfyUI/folder_paths.py", line 195, in get_filename_list_
    folders = folder_names_and_paths[folder_name]
KeyError: 'sams'
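
Both KeyErrors mean these folder keys were never registered with ComfyUI; they are normally added by ComfyUI-Impact-Pack, so installing or updating that pack is the cleaner fix. As a hedged workaround sketch only, the keys can be registered manually before the node's INPUT_TYPES runs; the folder_paths calls are standard ComfyUI, while the directory layout mirrors Impact-Pack's conventions and is otherwise an assumption.

```python
# Workaround sketch: register the missing model-folder keys manually.
import os
import folder_paths

models_dir = folder_paths.models_dir
folder_paths.add_model_folder_path("ultralytics_bbox", os.path.join(models_dir, "ultralytics", "bbox"))
folder_paths.add_model_folder_path("ultralytics_segm", os.path.join(models_dir, "ultralytics", "segm"))
folder_paths.add_model_folder_path("sams", os.path.join(models_dir, "sams"))
```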

TypeError: SAMLoader.load_model() takes from 1 to 2 positional arguments but 3 were given

When I tried detailer_fix.json, an error occurred. Help please!

Error occurred when executing Fooocus samLoaderPipe:

SAMLoader.load_model() takes from 1 to 2 positional arguments but 3 were given

File "/home/studio-lab-user/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/studio-lab-user/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/studio-lab-user/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/studio-lab-user/ComfyUI/custom_nodes/Fooocus_Nodes/py/fooocusNodes.py", line 1429, in doit
(sam_model,) = cls().load_model(model_name, device_mode)

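
The message means the installed ComfyUI-Impact-Pack's SAMLoader.load_model no longer accepts a second positional argument, i.e. the two packs disagree about the signature. A version-tolerant call could probe the signature first, as in this hedged sketch; the parameter name device_mode is taken from the node's own call site, and everything else is illustrative rather than either pack's actual code.

```python
# Compatibility sketch: call SAMLoader.load_model across differing signatures.
import inspect

def call_sam_loader(sam_loader_cls, model_name, device_mode):
    loader = sam_loader_cls()
    params = inspect.signature(loader.load_model).parameters
    if "device_mode" in params:
        return loader.load_model(model_name, device_mode=device_mode)
    # Fall back for versions whose load_model only takes the model name.
    return loader.load_model(model_name)
```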

TypeError: ModelPatcher.unpatch_model() got an unexpected keyword argument 'unpatch_weights'

Since the ComfyUI update today, I have been getting this error:
Traceback (most recent call last):

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute

output_data, output_ui = get_output_data(obj, input_data_all)

                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data

return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list

results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\Fooocus_Nodes\py\fooocusNodes.py", line 956, in image_prompt

task[0] = ip_adapter.preprocess(image, ip_adapter_path=ip_adapter_path)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context

return func(*args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context

return func(*args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fooocus\py\extras\ip_adapter.py", line 171, in preprocess

model_management.load_model_gpu(clip_vision.patcher)

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 442, in load_model_gpu

return load_models_gpu([model])

       ^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fooocus\py\fooocus_modules\patch.py", line 441, in patched_load_models_gpu

y = comfy.model_management.load_models_gpu_origin(*args, **kwargs)

    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 436, in load_models_gpu

cur_loaded_model = loaded_model.model_load(lowvram_model_memory)

                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 302, in model_load

self.model_unload()

File "C:\Users\nux\Desktop\ComfyUI_windows_portable\ComfyUI\comfy\model_management.py", line 312, in model_unload

self.model.unpatch_model(self.model.offload_device, unpatch_weights=unpatch_weights)

TypeError: ModelPatcher.unpatch_model() got an unexpected keyword argument 'unpatch_weights'
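
Here the bundled ModelPatcher's unpatch_model predates the unpatch_weights keyword that the updated ComfyUI now passes from model_unload, so the two are out of sync again; updating the node pack is the proper fix. Purely as a hypothetical stopgap, a wrapper like the following would let the old signature tolerate the new keyword. The import path is an assumption based on the traceback, not a confirmed location.

```python
# Hypothetical stopgap: let an older unpatch_model accept the newer unpatch_weights keyword.
import inspect
from ldm_patched.modules.model_patcher import ModelPatcher  # assumed location

_orig_unpatch = ModelPatcher.unpatch_model

if "unpatch_weights" not in inspect.signature(_orig_unpatch).parameters:
    def unpatch_model(self, device_to=None, unpatch_weights=True):
        # Older signature only knows device_to; silently drop the newer flag.
        return _orig_unpatch(self, device_to)

    ModelPatcher.unpatch_model = unpatch_model
```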
