
comfyui_magicclothing's People

Contributors

frankchieng

comfyui_magicclothing's Issues

Getting a black image as output every time.

I'm trying to run the workflow and it generates a completely black image. I even tried running it on the CPU and still had the issue. Below is the cmd log from when I run the workflow:

E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-05-17 15:08:48.897653
** Platform: Windows
** Python version: 3.11.8 (tags/v3.11.8:db85d51, Feb  6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
** Python executable: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\python.exe
** Log path: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\comfyui.log

Prestartup times for custom nodes:
   0.3 seconds: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 6144 MB, total RAM 16150 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : cudaMallocAsync
VAE dtype: torch.float32
Using pytorch cross attention
### Loading: ComfyUI-Manager (V2.34)
### ComfyUI Revision: 2178 [ece5acb8] | Released on '2024-05-12'
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(

Import times for custom nodes:
   0.0 seconds: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.2 seconds: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing
   0.3 seconds: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
got prompt
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
text_encoder\model.safetensors not found
Loading pipeline components...:  57%|█████████████████████████████▋                      | 4/7 [00:01<00:01,  2.27it/s]E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:06<00:00,  1.12it/s]
----checkpoints loaded from path: E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\checkpoints\cloth_segm.pth----
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py:3809: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py:1244: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  hidden_states = F.scaled_dot_product_attention(
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [04:03<00:00, 12.18s/it]
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py:456: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:919.)
  return F.conv2d(input, weight, bias, self.stride,
E:\magic_clothing\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\image_processor.py:90: RuntimeWarning: invalid value encountered in cast
  images = (images * 255).round().astype("uint8")
Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.
Prompt executed in 277.47 seconds

Here are the parameters I'm using:
[screenshot of the node parameters]
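A hedged side note, not the node's own code: the log shows NaNs during decoding ("invalid value encountered in cast"), and the diffusers safety checker then replaces the frame with a black image. On GTX 16xx cards this is commonly a half-precision problem, so forcing fp32 and building a quick test pipeline without the checker helps narrow it down (the repo id below is an illustrative SD 1.5 base, not necessarily what the node loads):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "SG161222/Realistic_Vision_V4.0_noVAE",  # illustrative base model
        torch_dtype=torch.float32,               # fp16 often yields NaN/black frames on GTX 16xx
        safety_checker=None,                     # the checker is what swaps NaN frames for black ones
        requires_safety_checker=False,
    )
    image = pipe("a plain t-shirt on a mannequin", num_inference_steps=20).images[0]
    image.save("test.png")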

How do I fix this error?

File "M:\comfyui\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "M:\comfyui\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "M:\comfyui\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "M:\comfyui\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 140, in garment_generation
pipe.load_lora_weights(ip_lora)
File "E:\anaconda\envs\comfyui1\Lib\site-packages\diffusers\loaders\lora.py", line 117, in load_lora_weights
self.load_lora_into_unet(
File "E:\anaconda\envs\comfyui1\Lib\site-packages\diffusers\loaders\lora.py", line 460, in load_lora_into_unet
inject_adapter_in_model(lora_config, unet, adapter_name=adapter_name)
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\mapping.py", line 163, in inject_adapter_in_model
peft_model = tuner_cls(model, peft_config, adapter_name=adapter_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\tuners\lora\model.py", line 111, in init
super().init(model, config, adapter_name)
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\tuners\tuners_utils.py", line 90, in init
self.inject_adapter(self.model, adapter_name)
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\tuners\tuners_utils.py", line 247, in inject_adapter
self.create_and_replace(peft_config, adapter_name, target, target_name, parent, **optional_kwargs)
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\tuners\lora\model.py", line 168, in create_and_replace
from .bnb import Linear8bitLt
File "E:\anaconda\envs\comfyui1\Lib\site-packages\peft\tuners\lora\bnb.py", line 19, in
import bitsandbytes as bnb
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes_init
.py", line 6, in
from . import cuda_setup, utils, research
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes\research_init
.py", line 1, in
from . import nn
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes\research\nn_init_.py", line 1, in
from .modules import LinearFP8Mixed, LinearFP8Global
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in
from bitsandbytes.optim import GlobalOptimManager
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes\optim_init_.py", line 6, in
from bitsandbytes.cextension import COMPILED_WITH_CUDA
File "E:\anaconda\envs\comfyui1\Lib\site-packages\bitsandbytes\cextension.py", line 20, in
raise RuntimeError('''
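A hedged diagnostic, not a fix from this thread: the traceback ends inside bitsandbytes' cextension.py, which raises at import time when the installed build has no CUDA support (common with CPU-only wheels on Windows). Importing it in isolation confirms whether bitsandbytes, rather than the node, is what fails; reinstalling or upgrading to a CUDA-enabled bitsandbytes build is the usual next step (exact package and version are assumptions):

    # Run inside the same Python environment that ComfyUI uses.
    try:
        import bitsandbytes as bnb  # the FaceID/LoRA path pulls this in via peft
        print("bitsandbytes", bnb.__version__, "imported fine")
    except RuntimeError as err:
        print("bitsandbytes was built without CUDA support:", err)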

how to fix this

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 92, in garment_generation
result = ip_model.generate(cloth_image, face_image, cloth_mask_image, prompt, a_prompt, n_prompt, num_samples, seed, scale, cloth_guidance_scale, sample_steps, height, width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_ipadapter_faceid.py", line 306, in generate
images = self.pipe(
^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\pipelines\OmsDiffusionPipeline.py", line 221, in call
if self.interrupt:
^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 138, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'OmsDiffusionPipeline' object has no attribute 'interrupt'

Prompt executed in 77.53 seconds
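A minimal, hedged illustration of why this AttributeError appears (the class below is a stand-in, not the repo's code): OmsDiffusionPipeline inherits diffusers' ConfigMixin, whose __getattr__ raises for attributes that were never assigned, and this copy of the pipeline reads self.interrupt without ever initializing it the way newer upstream pipelines initialize self._interrupt. A getattr default sidesteps the crash; updating the custom node is the other likely route.

    class DemoPipeline:
        def __call__(self, steps=3):
            for step in range(steps):
                if getattr(self, "interrupt", False):  # safe even if the attribute was never set
                    break
                print("denoising step", step)

    DemoPipeline()()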

how to solve this problem?

Error occurred when executing MagicClothing_Generate:

Linear.forward() takes 2 positional arguments but 3 were given

File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 177, in garment_generation
images, cloth_mask_image = full_net.generate(cloth_image, cloth_mask_image, prompt, a_prompt, num_samples, n_prompt, seed, scale, cloth_guidance_scale, sample_steps, height, width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_diffusion.py", line 92, in generate
self.ref_unet(torch.cat([cloth_embeds] * num_images_per_prompt), 0, prompt_embeds_null, cross_attention_kwargs={"attn_store": self.attn_store})
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1228, in forward
sample, res_samples = downsample_block(
^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1380, in forward
hidden_states = attn(
^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\transformers\transformer_2d.py", line 442, in forward
hidden_states = block(
^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention.py", line 329, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\attention_processor.py", line 519, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\attention_processor.py", line 321, in call
query = attn.to_q(hidden_states, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
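A hedged reading of the traceback above, with a tiny reproduction (the diffusers/torch internals named here are assumptions about the cause, not a confirmed diagnosis): older diffusers wrapped attention projections in a LoRACompatibleLinear whose forward accepted an extra scale argument, while newer releases use plain torch.nn.Linear, so attention_processor.py's attn.to_q(hidden_states, *args) passes one argument too many. Matching the diffusers version to what the node expects, or updating the node, is the usual resolution.

    import torch
    import torch.nn as nn

    to_q = nn.Linear(320, 320)              # newer diffusers uses a plain nn.Linear here
    hidden_states = torch.randn(1, 77, 320)
    query = to_q(hidden_states)             # fine
    try:
        to_q(hidden_states, 1.0)            # the extra scale argument older code passed along
    except TypeError as err:
        print(err)                          # forward() takes 2 positional arguments but 3 were given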

With the simple workflow:

got prompt
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.31s/it]
----checkpoints loaded from path: F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\checkpoints\cloth_segm.pth----
0%| | 0/20 [00:00<?, ?it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 177, in garment_generation
images, cloth_mask_image = full_net.generate(cloth_image, cloth_mask_image, prompt, a_prompt, num_samples, n_prompt, seed, scale, cloth_guidance_scale, sample_steps, height, width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_diffusion.py", line 96, in generate
images = self.pipe(
^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\pipelines\OmsDiffusionPipeline.py", line 221, in call
if self.interrupt:
^^^^^^^^^^^^^^
File "F:\UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 138, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'OmsDiffusionPipeline' object has no attribute 'interrupt'

Prompt executed in 179.08 seconds

Only female models are available

I'm trying to put a man's business suit on a male model, but the "magic_clothing_768_....safetensors" model always generates female figures. Is that normal?

stabilityai/sd-vae-ft-mse does not appear to have a file named config.json?

Error occurred when executing MagicClothing_Generate:

stabilityai/sd-vae-ft-mse does not appear to have a file named config.json.

File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_MagicClothing-main\nodes.py", line 192, in garment_generation
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\diffusers\models\modeling_utils.py", line 569, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\ComfyUI-aki\ComfyUI-aki-v1.3\python\lib\site-packages\diffusers\configuration_utils.py", line 402, in load_config
raise EnvironmentError(
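A hedged workaround sketch (paths are illustrative): the node calls AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse"), which needs the repo's config.json from the Hugging Face Hub, so this message usually means the Hub could not be reached or the cached copy is incomplete. Downloading the repo manually (config.json plus the weights file) and pointing the call at a local folder avoids the online lookup:

    import torch
    from diffusers import AutoencoderKL

    vae = AutoencoderKL.from_pretrained(
        r"D:\models\sd-vae-ft-mse",  # local folder containing config.json and the weights (illustrative path)
        torch_dtype=torch.float16,
    )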

A small suggestion

A small suggestion: instead of putting every model inside the plugin's own folder, load the IP-Adapter models from ComfyUI's shared models folder; otherwise the same files end up stored twice, which wastes a lot of space. Even better would be letting people fill in the model paths themselves.
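A hedged sketch of what the suggestion amounts to (not the node's current code): ComfyUI already exposes its shared model folders through folder_paths, so a node can resolve a user-selected file from the common models directory instead of keeping a private copy; the folder key and file name below are illustrative.

    import folder_paths  # available inside a running ComfyUI process

    lora_name = "ip-adapter-faceid_sd15_lora.safetensors"   # illustrative file name
    ip_lora = folder_paths.get_full_path("loras", lora_name)
    print(ip_lora)  # None if the file is not in any registered loras folder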

PEFT Backend & Local Variable "images" referenced before assigment

I went through the issues section to see if there was a solution, but it seems those issues are still open too.

I did install
pip install -U peft transformers

It didn't solve the PEFT backend problem, but when I disconnect my image from the face image input of the Human Garment Generation node, the PEFT error is gone.

Screenshot - 2024-04-26 21 25 34

But this time I get a second error which I couldn't resolve; I'm attaching it below.

Screenshot - 2024-04-26 21 19 59

I would appreciate the help.

Thank you

image and image_mask must have the same image size

Hello, does anyone have a solution to this problem? I have tried changing the sizes and formats of the photos and tried various resolutions, but it always gives me the same error. By the way, I have the models oms_diffusion_inpaint_100000_notext.safetensors and oms_diffusion_768_200000.safetensors in the directory (E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\checkpoints\early_access):

Error occurred when executing MagicClothing_Inpainting:

image and image_mask must have the same image size

File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 101, in cloth_inpainting
control_img = make_inpaint_condition(person_image,person_mask)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 41, in make_inpaint_condition
assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
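A hedged pre-processing sketch (file names are illustrative): the assertion fires when the person image and its mask arrive with different pixel dimensions, so resizing the mask to the image size before feeding the inpainting node avoids it.

    from PIL import Image

    person = Image.open("person.png")
    mask = Image.open("mask.png").convert("L").resize(person.size, Image.NEAREST)
    mask.save("mask_resized.png")  # feed this resized mask to the node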

License Issue

Hi,
We appreciate your contribution to the Magic Clothing ComfyUI node, but please check our official license and copy it into your project. And please avoid making a profit from this.

Thank you!

OpenPose does not work standalone

I tried to run a workflow using only OpenPose and the cloth, but it gives me this error:

local variable 'images' referenced before assignment
at

line 303, in garment_generation
    images = np.array(images).astype(np.float32) / 255.0

I think the problem is the fact that you did not handle the condition: "if face_image is None and pose_image is not None:"

ERROR:

['oms_diffusion_768_200000.safetensors']
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████| 5/5 [00:04<00:00, 1.00it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
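A minimal, hedged reproduction of the failure mode the reporter describes (the function below is a stand-in for nodes.py, not the actual code): when only the pose input is provided and no branch assigns images, the later np.array(images) line raises the "referenced before assignment" error.

    import numpy as np

    def garment_generation_demo(face_image=None, pose_image=None):
        if face_image is not None:
            images = [np.zeros((8, 8, 3), dtype=np.uint8)]
        # missing: a branch for the pose-only case, as the reporter points out
        return np.array(images).astype(np.float32) / 255.0

    try:
        garment_generation_demo(pose_image="pose-only")
    except UnboundLocalError as err:
        print(err)  # images was never assigned on the pose-only path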

Error occurred when executing MagicClothing_Generate

Hi,
how can I fix this error?

Error occurred when executing MagicClothing_Generate:

int() argument must be a string, a bytes-like object or a real number, not 'Image'

File "E:\Ai_test\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\Ai_test\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\Ai_test\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\Ai_test\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 179, in garment_generation
images = np.array(images).astype(np.float32) / 255.0
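A hedged post-processing sketch (the cause is an assumption from the message): np.array() cannot build a numeric array from a list of PIL Images when it falls back to an object array, and the subsequent astype(np.float32) then raises the int()/'Image' TypeError. Converting each frame explicitly avoids relying on the implicit cast.

    import numpy as np
    from PIL import Image

    pil_images = [Image.new("RGB", (512, 768)), Image.new("RGB", (512, 768))]
    batch = np.stack([np.asarray(img, dtype=np.float32) / 255.0 for img in pil_images])
    print(batch.shape)  # (2, 768, 512, 3)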

Custom Stable Diffusion checkpoint

Can we have the base Stable diffusion checkpoint as a variable/input of the node?
At the moment it defaults to Realistic Vision v4. It would be nice to try others without tweaking the code.
Moreover, it would be really great to use checkpoints we already have locally, without depending on the Hugging Face cache. Quite often I already have the checkpoint elsewhere, and this approach forces me to store additional gigabytes of the same file.
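A hedged sketch of what the request amounts to (not the node's current behaviour): diffusers can build an SD 1.5 pipeline directly from a local .safetensors checkpoint instead of a Hub repo id, which would let the node reuse checkpoints that already exist on disk; the path below is illustrative.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        r"D:\models\checkpoints\my_sd15_checkpoint.safetensors",  # an existing local file (illustrative)
        torch_dtype=torch.float16,
    )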

Error occurred when executing MagicClothing_Generate:

Error occurred when executing MagicClothing_Generate:

'str' object cannot be interpreted as an integer

File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 301, in garment_generation
images, cloth_mask_image = full_net.generate(cloth_image, cloth_mask_image, prompt, a_prompt, num_samples, n_prompt, seed, scale, cloth_guidance_scale, sample_steps, height, width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_diffusion.py", line 72, in generate
cloth = prepare_image(cloth_image, height, width)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\utils\utils.py", line 33, in prepare_image
image = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in image]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\utils\utils.py", line 33, in
image = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in image]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\bashi\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 2193, in resize
return self._new(self.im.resize(size, resample, box))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
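A hedged sketch of the likely cause (an assumption from the traceback, not confirmed): PIL's resize needs integer dimensions, so a width or height that reaches prepare_image() as a string has to be cast first.

    from PIL import Image

    width, height = "768", "576"  # values as they might arrive from a text widget
    img = Image.new("RGB", (1024, 768))
    resized = img.resize((int(width), int(height)), resample=Image.LANCZOS)
    print(resized.size)  # (768, 576)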

file location

How can I change the model download location from the cache on the C: drive to another drive?
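A hedged sketch (the environment variable comes from the Hugging Face documentation; the drive path is illustrative): the automatically downloaded models land in the Hugging Face cache under the user profile on C:, and setting HF_HOME before ComfyUI starts moves that cache elsewhere.

    import os

    # Must run before huggingface_hub/diffusers are imported, e.g. at the top of a launcher
    # script; alternatively, set HF_HOME as a system environment variable.
    os.environ["HF_HOME"] = r"D:\hf_cache"

    from huggingface_hub import constants
    print(constants.HF_HUB_CACHE)  # now resolves under D:\hf_cache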

Model paths

Why not reference the models from their original locations? Models go under ComfyUI's models folder and the IP-Adapter models under models/ipadapter; following the author's paths means copying a lot of extra model files into the plugin.

stabilityai/sd-vae-ft-mse does not appear???

Error occurred when executing MagicClothing_Generate:

stabilityai/sd-vae-ft-mse does not appear to have a file named config.json.

File "E:\comfyui\ComfyUI-aki-v1.1\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\comfyui\ComfyUI-aki-v1.1\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\comfyui\ComfyUI-aki-v1.1\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\comfyui\ComfyUI-aki-v1.1\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 68, in garment_generation
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(dtype=torch.float16)
File "E:\comfyui\ComfyUI-aki-v1.1\python\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\comfyui\ComfyUI-aki-v1.1\python\lib\site-packages\diffusers\models\modeling_utils.py", line 712, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
File "E:\comfyui\ComfyUI-aki-v1.1\python\lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "E:\comfyui\ComfyUI-aki-v1.1\python\lib\site-packages\diffusers\configuration_utils.py", line 402, in load_config
raise EnvironmentError(

Could you please tell me how to resolve this issue with the VAE?

?

Traceback (most recent call last):
File "X:_comfy\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1813, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "X:_comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing_init
.py", line 1, in
from .nodes import GarmentGenerate
File "X:_comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 10, in
from .garment_adapter.garment_diffusion import ClothAdapter
File "X:_comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_diffusion.py", line 10, in
from .attention_processor import REFAttnProcessor2_0 as REFAttnProcessor
File "X:_comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\attention_processor.py", line 6, in
from diffusers.utils import USE_PEFT_BACKEND
ImportError: cannot import name 'USE_PEFT_BACKEND' from 'diffusers.utils' (X:_comfy\ComfyUI_windows_portable\python_embeded\lib\site-packages\diffusers\utils_init_.py)

Cannot import X:_comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing module for custom nodes: cannot import name 'USE_PEFT_BACKEND' from 'diffusers.utils' (X:_comfy\ComfyUI_windows_portable\python_embeded\lib\site-packages\diffusers\utils_init_.py)
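A hedged version check (the exact minimum release is left unstated, as an assumption): USE_PEFT_BACKEND only exists in relatively recent diffusers releases, so this ImportError usually means the embedded Python still has an older diffusers than the node expects, and upgrading diffusers (and peft) in that environment is the usual fix.

    import diffusers
    print(diffusers.__version__)
    from diffusers.utils import USE_PEFT_BACKEND  # raises ImportError on releases that predate it
    print(USE_PEFT_BACKEND)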

Error occurred when executing MagicClothing_Generate:

How do I fix it?
Error occurred when executing MagicClothing_Generate:

'str' object cannot be interpreted as an integer

File "C:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 233, in garment_generation
result = ip_model.generate(cloth_image, face_image, cloth_mask_image, prompt, a_prompt, n_prompt, num_samples, seed, scale, cloth_guidance_scale, sample_steps, height, width, shortcut=v2)
File "C:\ComfyUI\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_ipadapter_faceid.py", line 470, in generate
cloth = prepare_image(cloth_image, height, width)
File "C:\ComfyUI\custom_nodes\ComfyUI_MagicClothing\utils\utils.py", line 33, in prepare_image
image = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in image]
File "C:\ComfyUI\custom_nodes\ComfyUI_MagicClothing\utils\utils.py", line 33, in
image = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in image]
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py", line 2222, in resize
return self._new(self.im.resize(size, resample, box))

A suggestion about the version pins in requirements.txt

Thank you for your hard work; ComfyUI's thriving ecosystem is thanks to contributors like you.
One suggestion: could the dependencies in requirements.txt be left without exact version pins, or, where a pin is truly needed, could a minimum version be used instead? Otherwise they can conflict with the dependencies of other already-installed projects.
[screenshot of the dependency conflicts]

Basic workflow works, but getting error when adding FaceID or OpenPose

I'm using the supplied workflows.

  • The basic one works great.
  • FaceID version gets the error below.
  • OpenPose + FaceID version, same error.
  • Disconnected FaceID, left openPose connected, error occurs.
  • When disconnecting FaceID AND OpenPose, it completes the generation.

I checked the other issues, updated ComfyUI, and verified that numpy is current.


Error occurred when executing MagicClothing_Generate:

stat: path should be string, bytes, os.PathLike or integer, not NoneType

File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 87, in garment_generation
pipe.load_lora_weights(ip_lora)
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\loaders\lora.py", line 109, in load_lora_weights
state_dict, network_alphas = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\loaders\lora.py", line 235, in lora_state_dict
weight_name = cls._best_guess_weight_name(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\loaders\lora.py", line 311, in _best_guess_weight_name
if os.path.isfile(pretrained_model_name_or_path_or_dict):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 30, in isfile

Install with Comfy Manager failed

It seems that the --extra-index-url https://download.pytorch.org/whl/cu118 line in requirements.txt is breaking the ComfyUI Manager installation. I don't think it's good practice to pin a device-specific wheel index there.

Unable to install

Error message:

Install(git-clone) error: https://github.com/frankchieng/ComfyUI_MagicClothing / invalid version number '2.1.1+cu118'

nvidia-smi output:

Fri May 17 10:06:13 2024       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A16-16Q      On   | 00000000:06:00.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |   2265MiB / 16384MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A16-16Q      On   | 00000000:07:00.0 Off |                    0 |
| N/A   N/A    P8    N/A /  N/A |      2MiB / 16384MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A     42216      C   ...s-comfyui/venv/bin/python     2263MiB |
+-----------------------------------------------------------------------------+

pip list output:

Package                   Version
------------------------- -----------
aiohttp                   3.9.5
aiosignal                 1.3.1
albumentations            1.4.7
aniso8601                 9.0.1
annotated-types           0.6.0
async-timeout             4.0.3
attrs                     23.2.0
blinker                   1.8.2
boto3                     1.34.107
botocore                  1.34.107
certifi                   2024.2.2
charset-normalizer        3.3.2
click                     8.1.7
coloredlogs               15.0.1
colorlog                  6.8.2
contourpy                 1.2.1
cycler                    0.12.1
Cython                    3.0.10
easydict                  1.13
einops                    0.8.0
filelock                  3.14.0
Flask                     3.0.3
flask-restx               1.3.0
flatbuffers               24.3.25
fonttools                 4.51.0
frozenlist                1.4.1
fsspec                    2024.5.0
fuzzywuzzy                0.18.0
gitdb                     4.0.11
GitPython                 3.1.43
huggingface-hub           0.23.0
humanfriendly             10.0
idna                      3.7
imageio                   2.34.1
importlib_resources       6.4.0
insightface               0.7.3
itsdangerous              2.2.0
Jinja2                    3.1.4
jmespath                  1.0.1
joblib                    1.4.2
jsonschema                4.22.0
jsonschema-specifications 2023.12.1
kiwisolver                1.4.5
kornia                    0.7.2
kornia_rs                 0.1.3
lazy_loader               0.4
Levenshtein               0.25.1
loguru                    0.7.2
MarkupSafe                2.1.5
matplotlib                3.9.0
mpmath                    1.3.0
multidict                 6.0.5
networkx                  3.3
numpy                     1.26.4
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu12         8.9.2.26
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu12          2.20.5
nvidia-nvjitlink-cu12     12.4.127
nvidia-nvtx-cu12          12.1.105
onnx                      1.16.0
onnxruntime-gpu           1.17.1
opencv-python-headless    4.9.0.80
packaging                 24.0
pillow                    10.3.0
pip                       22.0.2
prettytable               3.10.0
protobuf                  5.26.1
psutil                    5.9.8
pydantic                  2.7.1
pydantic_core             2.18.2
pyparsing                 3.1.2
python-dateutil           2.9.0.post0
python-Levenshtein        0.25.1
pytz                      2024.1
PyYAML                    6.0.1
rapidfuzz                 3.9.0
referencing               0.35.1
regex                     2024.5.15
requests                  2.31.0
rpds-py                   0.18.1
s3transfer                0.10.1
safetensors               0.4.3
scikit-image              0.23.2
scikit-learn              1.4.2
scipy                     1.13.0
setuptools                59.6.0
six                       1.16.0
smmap                     5.0.1
sympy                     1.12
threadpoolctl             3.5.0
tifffile                  2024.5.10
tokenizers                0.19.1
torch                     2.3.0
torchaudio                2.3.0+cu118
torchsde                  0.2.6
torchvision               0.18.0
tqdm                      4.66.1
trampoline                0.1.2
transformers              4.40.2
triton                    2.3.0
typing_extensions         4.11.0
urllib3                   2.2.1
wcwidth                   0.2.13
websocket-client          1.7.0
Werkzeug                  3.0.3
yarl                      1.9.4

how to fix this

Error occurred when executing MagicClothing_Generate:

stat: path should be string, bytes, os.PathLike or integer, not NoneType

File "/kaggle/working/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/kaggle/working/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/kaggle/working/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/kaggle/working/ComfyUI/custom_nodes/ComfyUI_MagicClothing/nodes.py", line 140, in garment_generation
pipe.load_lora_weights(ip_lora)
File "/opt/conda/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 110, in load_lora_weights
state_dict, network_alphas = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 236, in lora_state_dict
weight_name = cls._best_guess_weight_name(
File "/opt/conda/lib/python3.10/site-packages/diffusers/loaders/lora.py", line 312, in _best_guess_weight_name
if os.path.isfile(pretrained_model_name_or_path_or_dict):
File "/opt/conda/lib/python3.10/genericpath.py", line 30, in isfile
st = os.stat(path)

Hello, how to solve this problem

Error occurred when executing MagicClothing_Generate:

Linear.forward() takes 2 positional arguments but 3 were given

File "E:\ComfyUI-aki-v1.2\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI-aki-v1.2\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI-aki-v1.2\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI-aki-v1.2\custom_nodes\ComfyUI_MagicClothing\nodes.py", line 177, in garment_generation
images, cloth_mask_image = full_net.generate(cloth_image, cloth_mask_image, prompt, a_prompt, num_samples, n_prompt, seed, scale, cloth_guidance_scale, sample_steps, height, width)
File "E:\ComfyUI-aki-v1.2\custom_nodes\ComfyUI_MagicClothing\garment_adapter\garment_diffusion.py", line 92, in generate
self.ref_unet(torch.cat([cloth_embeds] * num_images_per_prompt), 0, prompt_embeds_null, cross_attention_kwargs={"attn_store": self.attn_store})
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1216, in forward
sample, res_samples = downsample_block(
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\zhaog\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\models\unets\unet_2d_blocks.py", line 1279, in forward
