
anime-webui-colab's Introduction

anime-webui-colab's People

Contributors

nuroisea


anime-webui-colab's Issues

Doesn't load the gradio.live screen

The notebook runs as always, but when I click the public URL link, the screen doesn't load and a 504 error appears after a while. I tried this with Anything, among other models.

Error while downloading OrangeMixs models

They updated the names of their AbyssOrangeMix3 models on HuggingFace, so Colab is unable to download them.

In Colab, manually changing the line to this (or any other OrangeMixs model):

model = "AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors"

resolves the issue.
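
For reference, here is a hedged sketch of how such a model string could be turned into a direct download URL; the WarriorMama777/OrangeMixs repository path and folder layout are assumptions based on this issue, not the notebook's actual code:

# Hypothetical helper: map the Colab cell's "<folder>/<file>" model string to a
# HuggingFace download URL. Base URL and layout are assumptions.
BASE = "https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models"

def orangemix_url(model: str) -> str:
    return f"{BASE}/{model}"

print(orangemix_url("AbyssOrangeMix3/AOM3A1B_orangemixs.safetensors"))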

CamelliaMix model links inaccessible

I had been using the CamelliaMix model (specifically NSFW 1.1) for a couple of days, and just yesterday it gave me an error saying the stable diffusion model failed to load. I thought nothing of it, but this morning I tried loading every other version of the model and none of them would load either, so that's obviously a problem. My next thought was: is this the web UI's problem, or is it specific to this model? I tested other models and they worked just fine, so it's this model specifically.
I tried using an ngrok token and got the same results. I don't know what the problem is, but the main messages I get when I try to load it are "file not found" and "stable diffusion model failed to load".
Maybe they changed its name?
Screenshot_20230708-130840~2
Screenshot_20230708-130840

EDIT: I just tried opening their HuggingFace site and nothing was there!
Screenshot_20230708-133030
But! I found another model by the same name, maybe it could be a good replacement?
Screenshot_20230708-133200

VAE is not working, what happened?

I spent almost four hours trying to solve this VAE problem, with no luck. Whenever I load a VAE with the PYOM* web UI Colab, it either says it couldn't load or complains about two devices being connected or something. And no, I'm not running two sessions. The web UI works normally otherwise, but the error appears whenever I set a VAE.

Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 55, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 35, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 620, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 729, in process_images_inner
    p.setup_conds()
  File "/content/stable-diffusion-webui/modules/processing.py", line 1126, in setup_conds
    super().setup_conds()
  File "/content/stable-diffusion-webui/modules/processing.py", line 346, in setup_conds
    self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, self.negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
  File "/content/stable-diffusion-webui/modules/processing.py", line 338, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "/content/stable-diffusion-webui/modules/prompt_parser.py", line 143, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 665, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 135, in encode
    return self(text)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 125, in forward
    outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer == "hidden")
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
    return self.text_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 708, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 223, in forward
    inputs_embeds = self.token_embedding(input_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
    return F.embedding(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2210, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
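
A minimal sketch of the class of error in that last line, assuming a CUDA runtime: the embedding weights sit on the GPU while the token ids stay on the CPU. The web UI's actual code path is more involved, but keeping the model/VAE and its inputs on a single device is what avoids it.

# Illustrative only: reproduces the "cpu and cuda:0" device mismatch and the fix.
import torch
import torch.nn as nn

emb = nn.Embedding(10, 4).cuda()   # weights on cuda:0
ids = torch.tensor([1, 2, 3])      # input ids left on the CPU
try:
    emb(ids)                       # RuntimeError: ... found at least two devices, cpu and cuda:0
except RuntimeError as e:
    print(e)

print(emb(ids.cuda()).shape)       # fix: move the input to the same device as the weights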

Crashes on start for no apparent reason

Log

๐Ÿ‘ Utility script imported.
๐ŸŒŸ Installing stable-diffusion-webui...
๐Ÿ“ฆ Installing 12 extensions...
  โ”” aspect-ratio-preset
  โ”” batchlinks
  โ”” cutoff
  โ”” dynamic-thresholding
  โ”” images-browser
  โ”” latent-couple-two-shot
  โ”” session-organizer
  โ”” state
  โ”” tagcomplete
  โ”” tiled-multidiffusion-upscaler
  โ”” tokenizer
  โ”” tunnels
๐Ÿ”ง Fetching configs...
๐Ÿ’‰ Fetching embeddings...
๐Ÿฉน Applying web UI Colab patches...
๐Ÿฉน Applying Colab memory patches...
env: LD_PRELOAD=/content/libtcmalloc_minimal.so.4
๐Ÿ“ฆ Installing aria2...
โฌ Downloading anything-v4.0-pruned.safetensors to /content/stable-diffusion-webui/models/Stable-diffusion...
โฌ Downloading anything-v4.0.vae.pt to /content/stable-diffusion-webui/models/VAE...
/content/stable-diffusion-webui
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Version: ## 1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning CodeFormer into /content/stable-diffusion-webui/repositories/CodeFormer...
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing ImageReward requirement for image browser


Installing pycloudflared

Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0-pruned.safetensors --vae-path /content/stable-diffusion-webui/models/VAE/anything-v4.0.vae.pt --share
No module 'xformers'. Proceeding without it.
Image Browser: ImageReward is not installed, cannot be used.
Image Browser: Creating database
Image Browser: Database created
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0-pruned.safetensors
Running on local URL:  http://127.0.0.1:7860/
preload_extensions_git_metadata for 19 extensions took 0.71s
Running on public URL: https://cf382503c7db26d2d6.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Startup time: 23.3s (import torch: 6.8s, import gradio: 0.9s, import ldm: 0.5s, other imports: 1.8s, setup codeformer: 0.2s, load scripts: 1.9s, create ui: 1.5s, gradio launch: 9.6s).
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 6.00MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 6.25MB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 1.72MB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 4.29MB/s]
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 16.8MB/s]
Loading VAE weights from commandline argument: /content/stable-diffusion-webui/models/VAE/anything-v4.0.vae.pt
Applying attention optimization: sdp... done.
Textual inversion embeddings loaded(7): bad-artist, bad-artist-anime, bad-hands-5, bad-image-v2-39000, bad_prompt_version2, EasyNegative, EasyNegativeV2
Model loaded in 49.8s (load weights from disk: 33.8s, create model: 2.0s, apply weights to model: 3.2s, apply half(): 3.4s, load VAE: 4.9s, move model to device: 2.3s, calculate empty prompt: 0.2s).

Settings

[ ]
Select a model before running

โญ Model selection [[?]](https://github.com/NUROISEA/anime-webui-colab/wiki/Selecting-a-model)
It could take 4-7 minutes to see a link to the web UI, please be patient! :)

model:

anything-v4.0-pruned.safetensors
Select ControlNet models to use [[?]](https://github.com/NUROISEA/anime-webui-colab/wiki/Selecting-ControlNet-models)

controlnet:

none
🔧 Web UI settings
This option only affects the first launch [[?]](https://github.com/NUROISEA/anime-webui-colab/wiki/First-launch-web-UI-settings)

webui_version:

stable
extensions_version:

stable
This saves your generation to Google Drive [[?]](https://github.com/NUROISEA/anime-webui-colab/wiki/Saving-outputs-to-Google-Drive)

outputs_to_drive:

output_drive_folder:
AI/Generated
Change tunnels if you have connection issues with gradio [[?]](https://github.com/NUROISEA/anime-webui-colab/wiki/Using-different-tunnels)

tunnel:

gradio
ngrok_token:
ะ’ัั‚ะฐะฒัŒั‚ะต ะทะฝะฐั‡ะตะฝะธะต (text)
ngrok_region:

auto
Did something break? Please report it on [GitHub](https://github.com/NUROISEA/anime-webui-colab/issues) or fill out this [Google Form](https://colab.research.google.com/corgiredirector?site=https%3A%2F%2Fforms.gle%2FdnaJojPR4yqvr9dN8) (no sign-in needed)!

After starting a model, it works for 5-10 minutes and then stops without an error.
link:
https://colab.research.google.com/github/NUROISEA/anime-webui-colab/blob/main/notebooks/anything_v4.ipynb
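
For context, a hedged sketch of how the settings cell pasted above typically maps to Colab form parameters; the option lists and defaults here are assumptions, not the notebook's actual values:

# Hypothetical Colab form cell mirroring the labels shown above.
model = "anything-v4.0-pruned.safetensors"  #@param {type:"string"}
controlnet = "none"                         #@param ["none", "v1.0", "v1.1"]
webui_version = "stable"                    #@param ["stable", "latest"]
extensions_version = "stable"               #@param ["stable", "latest"]
outputs_to_drive = False                    #@param {type:"boolean"}
output_drive_folder = "AI/Generated"        #@param {type:"string"}
tunnel = "gradio"                           #@param ["gradio", "ngrok", "cloudflared", "remotemoe"]
ngrok_token = ""                            #@param {type:"string"}
ngrok_region = "auto"                       #@param ["auto", "us", "eu", "ap", "au", "sa", "jp", "in"]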

OrangeMix & MixPro Notebooks Disconnect Under 10 Minutes Consistently

๐Ÿ‘ AOM3_orangemixs.safetensors already downloaded.
๐Ÿ‘ orangemix.vae.pt already downloaded.
/content/stable-diffusion-webui
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Version: ## 1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Installing requirements

Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/AOM3_orangemixs.safetensors --vae-path /content/stable-diffusion-webui/models/VAE/orangemix.vae.pt --share
No module 'xformers'. Proceeding without it.
Image Browser: ImageReward is not installed, cannot be used.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/AOM3_orangemixs.safetensors
Running on local URL: http://127.0.0.1:7860/
preload_extensions_git_metadata for 19 extensions took 0.70s
Running on public URL: https://4099e1029036ce0944.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Startup time: 19.3s (import torch: 7.0s, import gradio: 1.1s, import ldm: 0.4s, other imports: 0.8s, load scripts: 1.5s, create ui: 1.9s, gradio launch: 6.4s).
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 9.62MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 57.6MB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 1.64MB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 3.74MB/s]
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 14.5MB/s]
Loading VAE weights from commandline argument: /content/stable-diffusion-webui/models/VAE/orangemix.vae.pt
Applying attention optimization: sdp... done.
Textual inversion embeddings loaded(7): bad-artist, bad-artist-anime, bad-hands-5, bad-image-v2-39000, bad_prompt_version2, EasyNegative, EasyNegativeV2
Model loaded in 33.5s (load weights from disk: 22.9s, create model: 5.6s, load VAE: 4.5s, calculate empty prompt: 0.2s).
Couldn't find Lora with name Rabbit (wlsdnjs950)
Couldn't find Lora with name aki
Couldn't find Lora with name 3DMM_V11
0% 0/35 [00:00<?, ?it/s]
...
100% 35/35 [00:32<00:00, 1.08it/s]

Total progress: 100% 35/35 [00:45<00:00, 1.30s/it]
{"prompt": "<lora:Rabbit (wlsdnjs950):0.8> lora:aki:0.3 lora:3DMM_V11:0.5 3d, thick eyebrows, gradient, gradient background, barefoot, standing, petite, little female, smile, \n", "all_prompts": ["<lora:Rabbit (wlsdnjs950):0.8> lora:aki:0.3 lora:3DMM_V11:0.5 3d, thick eyebrows, gradient, gradient background, barefoot, standing, petite, little female, smile, \n"], "negative_prompt": "(worst quality, low quality:1.4) Poorly Made Bad 3D, Lousy Bad Realistic, bad anatomy, bad hands, extra fingers, fewer fingers,\n", "all_negative_prompts": ["(worst quality, low quality:1.4) Poorly Made Bad 3D, Lousy Bad Realistic, bad anatomy, bad hands, extra fingers, fewer fingers,\n"], "seed": 2434781936, "all_seeds": [2434781936], "subseed": 953420483, "all_subseeds": [953420483], "subseed_strength": 0, "width": 560, "height": 1024, "sampler_name": "DPM++ SDE Karras", "cfg_scale": 6.5, "steps": 35, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_hash": null, "seed_resize_from_w": 0, "seed_resize_from_h": 0, "denoising_strength": null, "extra_generation_params": {}, "index_of_first_image": 0, "infotexts": ["<lora:Rabbit (wlsdnjs950):0.8> lora:aki:0.3 lora:3DMM_V11:0.5 3d, thick eyebrows, gradient, gradient background, barefoot, standing, petite, little female, smile, \n\nNegative prompt: (worst quality, low quality:1.4) Poorly Made Bad 3D, Lousy Bad Realistic, bad anatomy, bad hands, extra fingers, fewer fingers,\n\nSteps: 35, Sampler: DPM++ SDE Karras, CFG scale: 6.5, Seed: 2434781936, Size: 560x1024, Model: AOM3_orangemixs, RNG: CPU, Version: ## 1.4.0"], "styles": [], "job_timestamp": "20230911015418", "clip_skip": 1, "is_using_inpainting_conditioning": false}
loading Lora /content/stable-diffusion-webui/models/Lora/Rabbit (wlsdnjs950).safetensors: SafetensorError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 162, in load_lora
sd = sd_models.read_state_dict(lora_on_disk.filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 250, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

loading Lora /content/stable-diffusion-webui/models/Lora/porforever_v1.0.safetensors: SafetensorError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 162, in load_lora
sd = sd_models.read_state_dict(lora_on_disk.filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 250, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

loading Lora /content/stable-diffusion-webui/models/Lora/3DMM_V11.safetensors: SafetensorError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 162, in load_lora
sd = sd_models.read_state_dict(lora_on_disk.filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 250, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

0% 0/35 [00:00<?, ?it/s]
...
100% 35/35 [00:30<00:00, 1.15it/s]
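
A hedged note on the SafetensorError lines above: "MetadataIncompleteBuffer" usually means the .safetensors file on disk is truncated (for example, an interrupted download or Drive sync). A quick way to find the bad files before launching is to try opening each header; the path below assumes the default Lora folder:

# Illustrative check: list Lora files whose safetensors header cannot be read.
from pathlib import Path
from safetensors import safe_open

lora_dir = Path("/content/stable-diffusion-webui/models/Lora")
for f in sorted(lora_dir.glob("*.safetensors")):
    try:
        with safe_open(f, framework="pt", device="cpu"):
            pass
    except Exception as e:  # SafetensorError surfaces here
        print(f"corrupt: {f.name} ({e})")

Re-downloading the flagged files is usually enough.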

error: subprocess-exited-with-error

mix-pro-v3 notebook (unsure if others are affected)

Python 3.9.16 (main, Dec  7 2022, 01:11:51) 
[GCC 9.4.0]
Commit hash: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
Installing gfpgan
Installing clip
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 380, in <module>
    prepare_environment()
  File "/content/stable-diffusion-webui/launch.py", line 293, in prepare_environment
    run_pip(f"install {clip_package}", "clip")
  File "/content/stable-diffusion-webui/launch.py", line 145, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "/content/stable-diffusion-webui/launch.py", line 113, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install clip.
Command: "/usr/bin/python3" -m pip install git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 --prefer-binary
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
  Cloning https://github.com/openai/CLIP.git (to revision d50d76daa670286dd6cacf3bcd80b5e4823fc8e1) to /tmp/pip-req-build-qb6um2us

stderr:   Running command git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git /tmp/pip-req-build-qb6um2us
  fatal: unable to access 'https://github.com/openai/CLIP.git/': Failed to connect to github.com port 443: Connection refused
  warning: Clone succeeded, but checkout failed.
  You can inspect what was checked out with 'git status'
  and retry with 'git restore --source=HEAD :/'

  error: subprocess-exited-with-error
  
  × git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git /tmp/pip-req-build-qb6um2us did not run successfully.
  │ exit code: 128
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git /tmp/pip-req-build-qb6um2us did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
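
Since the underlying failure is "Failed to connect to github.com port 443: Connection refused", this looks like a transient network problem rather than a notebook bug. A hedged workaround is to retry the same install from a cell before launching; the command and revision below are copied from the log above:

# Retry the CLIP install a few times, assuming the outage is transient.
import subprocess, sys, time

cmd = [
    sys.executable, "-m", "pip", "install",
    "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1",
    "--prefer-binary",
]
for attempt in range(3):
    if subprocess.run(cmd).returncode == 0:
        break
    time.sleep(10)  # brief pause before retrying the clone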

NameError: name 'utility' is not defined

NameError Traceback (most recent call last)
in <cell line: 43>()
43 try:
---> 44 utility
45 except NameError:

NameError: name 'utility' is not defined

During handling of the above exception, another exception occurred:

UnboundLocalError Traceback (most recent call last)
1 frames
/content/utility.py in log_usage(key)
47
48 def log_usage(key):
---> 49 if disabled_logging:
50 return
51

UnboundLocalError: local variable 'disabled_logging' referenced before assignment
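
A hedged sketch of the failure pattern the traceback points at (names mirror the traceback; the real utility.py may differ): if log_usage assigns disabled_logging anywhere in its body, Python treats the name as local, so the early read raises UnboundLocalError when the function is called, even if a module-level flag exists.

disabled_logging = False  # module-level flag the function expects to read

def log_usage_broken(key):
    if disabled_logging:       # UnboundLocalError when called: read before the local assignment below
        return
    disabled_logging = True    # this assignment makes the name local to the function

def log_usage_fixed(key):
    global disabled_logging    # declare the module-level name explicitly
    if disabled_logging:
        return
    disabled_logging = True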

(One of the) ControlNet extensions causes the web UI to be non-functional

โฌ Downloading anything-v4.5-pruned.safetensors to /content/stable-diffusion-webui/models/Stable-diffusion...
๐Ÿ‘ anything-v4.0.vae.pt already downloaded.
โŒ› This might take a while! Grab a ๐Ÿฟ or something xD

๐Ÿ“ข These models are FP16, btw. ;)

๐Ÿค™ Downloading 8 ControlNet v1.0 files/models...
๐Ÿ‘ control_canny.safetensors already downloaded.
๐Ÿ‘ control_depth.safetensors already downloaded.
๐Ÿ‘ control_hed.safetensors already downloaded.
๐Ÿ‘ control_mlsd.safetensors already downloaded.
๐Ÿ‘ control_normal.safetensors already downloaded.
๐Ÿ‘ control_openpose.safetensors already downloaded.
๐Ÿ‘ control_scribble.safetensors already downloaded.
๐Ÿ‘ control_seg.safetensors already downloaded.
/content/stable-diffusion-webui
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun  7 2023, 12:45:35) [GCC 9.4.0]
Version: ## 1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Installing requirements





Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/anything-v4.5-pruned.safetensors --vae-path /content/stable-diffusion-webui/models/VAE/anything-v4.0.vae.pt --share
No module 'xformers'. Proceeding without it.
2023-07-18 02:07:20,179 - ControlNet - INFO - ControlNet v1.1.232
ControlNet preprocessor location: /content/stable-diffusion-webui/extensions/controlnet/annotator/downloads
2023-07-18 02:07:20,343 - ControlNet - INFO - ControlNet v1.1.232
Image Browser: ImageReward is not installed, cannot be used.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/anything-v4.5-pruned.safetensors
Running on local URL:  http://127.0.0.1:7860/
preload_extensions_git_metadata for 23 extensions took 0.93s
Running on public URL: https://afccdf8f9763638b98.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Startup time: 19.9s (import torch: 8.3s, import gradio: 0.9s, import ldm: 0.5s, other imports: 1.5s, setup codeformer: 0.1s, load scripts: 2.1s, create ui: 2.7s, gradio launch: 3.7s).
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 419, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 84, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/gzip.py", line 44, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
    raise e
  File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 271, in api_info
    return gradio.blocks.get_api_info(config, serialize)  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 504, in get_api_info
    serializer = serializing.COMPONENT_MAPPING[type]()
KeyError: 'dataset'
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights from commandline argument: /content/stable-diffusion-webui/models/VAE/anything-v4.0.vae.pt
Applying attention optimization: sdp... done.
Textual inversion embeddings loaded(7): bad-artist, bad-artist-anime, bad-hands-5, bad-image-v2-39000, bad_prompt_version2, EasyNegative, EasyNegativeV2
Model loaded in 21.6s (load weights from disk: 13.9s, create model: 3.2s, load VAE: 4.1s, calculate empty prompt: 0.2s).

Clean up colab UI

Currently the UI looks like this

image

Proposed changes

image

This occurred to me while working on #14, since the notebooks look like a pain to edit.

An added benefit of this change is that it makes editing several notebooks at once easier, and makes it easier to add new settings (if I intend to add more).

It makes the cell a lot easier on the eyes, at the cost of some user friendliness (though IMO a cleaner UI would be better for clueless people).

"ModuleNotFoundError"

Hi, I tried to open the web UI today and got this error, and I'm not sure how to handle it. I'd be thankful for any help:

Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/models/hssk.safetensors --ckpt-dir /content/models --vae-dir /content/VAE --vae-path /content/VAE/kl-f8-anime2.ckpt --remotemoe
2024-03-16 23:52:51.712061: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-16 23:52:51.712178: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-16 23:52:51.843661: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
/usr/local/lib/python3.10/dist-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Parallel'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
/usr/local/lib/python3.10/dist-packages/gradio_client/documentation.py:103: UserWarning: Could not get documentation group for <class 'gradio.mix.Series'>: No known documentation group for module 'gradio.mix'
warnings.warn(f"Could not get documentation group for {cls}: {exc}")
No module 'xformers'. Proceeding without it.
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 38, in <module>
    main()
  File "/content/stable-diffusion-webui/launch.py", line 34, in main
    start()
  File "/content/stable-diffusion-webui/modules/launch_utils.py", line 340, in start
    import webui
  File "/content/stable-diffusion-webui/webui.py", line 49, in <module>
    from modules import shared, sd_samplers, upscaler, extensions, localization, ui_tempdir,
  File "/content/stable-diffusion-webui/modules/ui_extra_networks.py", line 7, in <module>
    from modules.ui import up_down_symbol
  File "/content/stable-diffusion-webui/modules/ui.py", line 26, in <module>
    import modules.gfpgan_model
  File "/content/stable-diffusion-webui/modules/gfpgan_model.py", line 4, in <module>
    import gfpgan
  File "/usr/local/lib/python3.10/dist-packages/gfpgan/__init__.py", line 2, in <module>
    from .archs import *
  File "/usr/local/lib/python3.10/dist-packages/gfpgan/archs/__init__.py", line 2, in <module>
    from basicsr.utils import scandir
  File "/usr/local/lib/python3.10/dist-packages/basicsr/__init__.py", line 4, in <module>
    from .data import *
  File "/usr/local/lib/python3.10/dist-packages/basicsr/data/__init__.py", line 22, in <module>
    _dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name i
  File "/usr/local/lib/python3.10/dist-packages/basicsr/data/__init__.py", line 22, in <listcomp>
    _dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name i
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/usr/local/lib/python3.10/dist-packages/basicsr/data/realesrgan_dataset.py", line 11, in <module>
    from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
  File "/usr/local/lib/python3.10/dist-packages/basicsr/data/degradations.py", line 8, in <module>
    from torchvision.transforms.functional_tensor import rgb_to_grayscale
ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor'
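
The import that fails here was removed from newer torchvision releases, while basicsr still expects it. A hedged stopgap, in the spirit of the notebook's other sed patches, is to rewrite that single import before launching the web UI; the path and replacement are taken from the traceback above, and the notebook's real fix may differ (e.g. pinning an older torchvision):

# Rewrite basicsr's removed-torchvision import in place before launch.
import pathlib

degradations = pathlib.Path(
    "/usr/local/lib/python3.10/dist-packages/basicsr/data/degradations.py"
)
text = degradations.read_text()
text = text.replace(
    "from torchvision.transforms.functional_tensor import rgb_to_grayscale",
    "from torchvision.transforms.functional import rgb_to_grayscale",
)
degradations.write_text(text)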

Please Fix CamelliaMix!

I tried to run the CamelliaMix models, but none of them are working; there aren't any checkpoints. I think the address of the models has been changed. Thanks.

Animagine-XL does not work upon relaunching

Web UI and extensions versions are both set to latest.

Logs:
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors --share
2024-04-15 08:07:10.044843: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-15 08:07:10.044920: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-15 08:07:10.050320: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:414: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_height = gr.Button(value="Calculate Height").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:414: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_height = gr.Button(value="Calculate Height").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:422: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_width = gr.Button(value="Calculate Width").style(
/content/stable-diffusion-webui/extensions/aspect-ratio-preset/scripts/sd-webui-ar.py:422: GradioDeprecationWarning: Use `scale` in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_width = gr.Button(value="Calculate Width").style(
/content/stable-diffusion-webui/extensions/latent-couple-two-shot/scripts/two_shot.py:130: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
/content/stable-diffusion-webui/extensions/latent-couple-two-shot/scripts/two_shot.py:130: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
Running on local URL:  http://127.0.0.1:7860/
Running on public URL: https://bc2a44aac1d64916d0.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Startup time: 31.4s (prepare environment: 6.0s, import torch: 8.1s, import gradio: 2.2s, setup paths: 7.4s, initialize shared: 0.3s, other imports: 1.1s, load scripts: 1.6s, create ui: 2.8s, gradio launch: 1.9s).
Creating model from config: /content/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
creating model quickly: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/content/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!

Failed to create model quickly; will retry using slow method.
loading stable diffusion model: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/content/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 732, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!


Stable diffusion model failed to load
Applying attention optimization: sdp... done.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/animagine-xl-3.1.safetensors
Creating model from config: /content/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
creating model quickly: NotImplementedError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/ui.py", line 1154, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/content/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
    load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
  File "/usr/local/lib/python3.10/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 73, in cuda
    return super().cuda(device=device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!

Main takeaway is the NotImplementedError: Cannot copy out of meta tensor; no data!
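
A minimal sketch of what that error means, assuming a GPU runtime: newer web UI versions first build the model with parameters on the meta device (no data yet), and the .cuda() that the Colab patch appends to instantiate_from_config (the same patch shown in the !sed commands at the end of this page) then tries to copy those empty tensors to the GPU, which PyTorch refuses to do.

# Illustrative only: meta-device parameters cannot be moved with .cuda().
import torch
import torch.nn as nn

with torch.device("meta"):
    layer = nn.Linear(4, 4)    # parameters are meta tensors with no storage

try:
    layer.cuda()               # NotImplementedError: Cannot copy out of meta tensor; no data!
except NotImplementedError as e:
    print(e)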

Lora Block Weight + latest issues

After nearly a month I finally stopped being lazy and am presenting the problem, lol.
I've always liked using Lora Block Weight, but many months ago it stopped working. It used to be one of the pillars of my art generation, so without it my results aren't as good as in my prime days.

As for the Lora Block Weight issue, it says:

Error running process_batch: /content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 490, in process_batch
script.process_batch(p, *script_args,
kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 511, in process_batch
if not self.isnet: loradealer(self, o_prompts ,self.lratios,self.elementals)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 836, in loradealer
te,unet = multidealer(te,unet)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 926, in multidealer
return float(t),float(u)
ValueError: could not convert string to float: 'face'

The 'face' in this ValueError is a preset that I enter in the Block Weight tab:

face:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0
ware:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0
pose:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0

Then I'm supposed to put these presets into Open TextEditor, which doesn't seem possible: when I click it, nothing appears. So I've always just entered them directly where those numbers go once you've already put them into your SD before, then saved and loaded the tags and presets; that always worked. But then, like I said, it stopped.

For the link, it's: https://github.com/hako-mikan/sd-webui-lora-block-weight.git
I install it in Extensions > Install from URL
then I always restart the SD so that it works.

As for how to use it:
<lora:ModelName:0.6:face>
<lora:ModelName:0.6:ware>
<lora:ModelName:0.6:pose>

If we could fix this, it would be very meaningful to me.
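
For anyone debugging this, here is a hedged sketch (not the extension's actual code) of the parsing step that seems to fail, based on the traceback and the presets above: the te/unet strengths are converted with float(), so a preset name like "face" blows up unless it is first resolved to its block-weight string.

# Illustrative parsing sketch; preset values copied from the report above.
presets = {
    "face": "1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0",
}

def resolve(te: str, unet: str):
    unet = presets.get(unet, unet)  # substitute a named preset before numeric parsing
    try:
        return float(te), float(unet)                           # plain numeric weight
    except ValueError:
        return float(te), [float(w) for w in unet.split(",")]   # per-block weights

print(resolve("0.6", "face"))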

About the latest version of Stable Diffusion, my problem is

Stable diffusion model failed to load
changing setting sd_vae to kl-f8-anime2.ckpt: AttributeError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "/content/stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/initialize_util.py", line 175, in
shared.opts.onchange("sd_vae", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
File "/content/stable-diffusion-webui/modules/sd_vae.py", line 255, in reload_vae_weights
checkpoint_info = sd_model.sd_checkpoint_info
AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'

Loading weights [None] from /content/models/hssk.safetensors
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
creating model quickly: NotImplementedError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/ui.py", line 1172, in
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "/content/stable-diffusion-webui/modules/shared_items.py", line 133, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 621, in get_sd_model
load_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 724, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 550, in init
super().init(conditioning_key=conditioning_key, *args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 92, in init
self.model = DiffusionWrapper(unet_config, conditioning_key)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1314, in init
self.diffusion_model = instantiate_from_config(diff_model_config)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in
return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!

Failed to create model quickly; will retry using slow method.
loading stable diffusion model: NotImplementedError
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/ui.py", line 1172, in
update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
File "/content/stable-diffusion-webui/modules/shared_items.py", line 133, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 621, in get_sd_model
load_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 733, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 550, in init
super().init(conditioning_key=conditioning_key, *args, **kwargs)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 92, in init
self.model = DiffusionWrapper(unet_config, conditioning_key)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1314, in init
self.diffusion_model = instantiate_from_config(diff_model_config)
File "/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict())).cuda()
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in cuda
return self._apply(lambda t: t.cuda(device))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 802, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 825, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 911, in
return self._apply(lambda t: t.cuda(device))
NotImplementedError: Cannot copy out of meta tensor; no data!

Stable diffusion model failed to load

This problem is universal in the latest version for me, which is why I've never been able to use it. I do all the same steps in the stable version, which works, but the latest never has. The error happens as soon as I try to generate something.
In fact, I remember it working a single time (I don't remember how), but as soon as I restarted to install Block Weight, the error came back.

For now these are the two problems I have. The other one is that LoRA issue, which I can live with, but these two... damn.

Bonus: I don't know if it's because I use the stable version, but LyCORIS never really "worked" for me either; the tab doesn't appear inside the web UI, so I always have to install it manually into the Lora folder on Google Drive 😭😭😭

RuntimeError: Cannot add middleware after an application has started

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Traceback (most recent call last):
File "launch.py", line 327, in
start()
File "launch.py", line 320, in start
webui.webui()
File "/content/stable-diffusion-webui/webui.py", line 224, in webui
app.add_middleware(GZipMiddleware, minimum_size=1000)
File "/usr/local/lib/python3.8/dist-packages/starlette/applications.py", line 135, in add_middleware
raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Killing tunnel 127.0.0.1:7860 <> https://731ed925-7ded-4a3e.gradio.live/

Unify CLI patches for ease of editing

Problem

Literally the same as #2, but for the !sed commands.

Currently the Colab needs some patches so the notebook doesn't ^C unexpectedly, which means the following commands are required:

!sed -i -e '''/    prepare_environment()/a\    os.system\(f\"""sed -i -e ''\"s/self.logvar\\[t\\]/self.logvar\\[t.item()\\]/g\"'' /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py""")''' /content/stable-diffusion-webui/launch.py
!sed -i -e '''/    prepare_environment()/a\    os.system\(f\"""sed -i -e ''\"s/dict()))$/dict())).cuda()/gm\"'' /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py""")''' /content/stable-diffusion-webui/launch.py

Also, since we are not using the latest commit of the web UI, the following is required for it to even launch:

!echo "fastapi==0.90.1" >> /content/stable-diffusion-webui/requirements_versions.txt

But what if I want the two-shot extension? That requires another patch:

!git apply --ignore-whitespace extensions/stable-diffusion-webui-two-shot/cfg_denoised_callback-ea9bd9fc.patch

But what if we update the web UI and no longer need that fastapi pin? Do we edit all the notebooks again?

Solution (?)

Add those patches to configs/utility.py.

patches = [
  "paaaaaaaaaaaaaaaaaaaaaaatch",
]

But how do we deal with !sed commands? I'm too lazy to escape everything and minor editing would be a pain in the ass.

Add a new file configs/patch_list.txt which contains everything:

sed [...]
sed [,,,]
echo [,,,]

Probably figure out a way to fetch it from a URL and change the notebooks to this:

for patch in utility.patches:
  !{patch}

and configs/utility.py to this:

# hypothetical function to fetch a text file and return an array of lines
patches = fetch_text_file("configs/patch_list.txt")
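
For reference, a rough sketch of what that hypothetical fetch_text_file helper could look like, assuming the patch list is served over plain HTTP from this repo (the base URL and branch below are placeholders, not something that exists yet):

# configs/utility.py (hypothetical helper; the URL below is a placeholder)
import urllib.request

def fetch_text_file(path, base_url="https://raw.githubusercontent.com/NUROISEA/anime-webui-colab/main/"):
    # fetch a text file and return its non-empty, non-comment lines as a list of shell commands
    with urllib.request.urlopen(base_url + path) as response:
        text = response.read().decode("utf-8")
    return [line.strip() for line in text.splitlines() if line.strip() and not line.startswith("#")]

patches = fetch_text_file("configs/patch_list.txt")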

Deforum fails to load

Error loading script: deforum.py
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "/content/stable-diffusion-webui/modules/script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/content/stable-diffusion-webui/extensions/ext-deforum/scripts/deforum.py", line 17, in <module>
    import deforum_helpers.args as deforum_args
ModuleNotFoundError: No module named 'deforum_helpers'

ControlNet v1.1 is not working

    Traceback (most recent call last):
      File "/content/stable-diffusion-webui/modules/scripts.py", line 474, in process
        script.process(p, *script_args)
      File "/content/stable-diffusion-webui/extensions/controlnet/scripts/controlnet.py", line 736, in process
        model_net = Script.load_control_model(p, unet, unit.model, unit.low_vram)
      File "/content/stable-diffusion-webui/extensions/controlnet/scripts/controlnet.py", line 299, in load_control_model
        model_net = Script.build_control_model(p, unet, model, lowvram)
      File "/content/stable-diffusion-webui/extensions/controlnet/scripts/controlnet.py", line 365, in build_control_model
        assert os.path.exists(override_config), f'Error: The model config {override_config} is missing. ControlNet 1.1 must have configs.'
    AssertionError: Error: The model config /content/stable-diffusion-webui/extensions/controlnet/models/control_v11u_sd15_tile.yaml is missing. ControlNet 1.1 must have configs.

---

I saw on other Git repositories that this is an issue with config file naming, but how can I change the config files?
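
Not a confirmed fix, but the assertion above only checks that a .yaml with the same basename sits next to the model file, and the SD 1.5 ControlNet 1.1 configs are interchangeable as far as I know. A minimal sketch that copies an existing config for every model that is missing one (the source config name is an assumption; use any .yaml already present in that folder):

import os
import shutil

models_dir = "/content/stable-diffusion-webui/extensions/controlnet/models"
# assumption: any existing SD 1.5 ControlNet 1.1 config can serve as the template
source_config = os.path.join(models_dir, "control_v11p_sd15_canny.yaml")

for name in os.listdir(models_dir):
    base, ext = os.path.splitext(name)
    if ext not in (".pth", ".safetensors"):
        continue
    target = os.path.join(models_dir, base + ".yaml")
    if not os.path.exists(target):
        shutil.copyfile(source_config, target)  # give the model a config with a matching name
        print(f"created {target}")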

It attempted to "free" invalid pointer, thus stopping the Colab midway

๐Ÿ‘ Utility script imported.
๐Ÿฉน Applying Colab memory fix...
env: LD_PRELOAD=libtcmalloc.so
๐Ÿ”ผ Selected the latest version of the web UI.
๐ŸŒŸ Installing stable-diffusion-webui...
๐Ÿ”ผ Installing the latest versions of the extensions.
๐Ÿ“ฆ Installing 12 extensions...
โ”” aspect-ratio-preset
โ”” batchlinks
โ”” cutoff
โ”” dynamic-thresholding
โ”” images-browser
โ”” latent-couple-two-shot
โ”” session-organizer
โ”” state
โ”” tagcomplete
โ”” tiled-multidiffusion-upscaler
โ”” tokenizer
โ”” tunnels
๐Ÿ”ง Fetching configs...
๐Ÿ’‰ Fetching embeddings...
๐Ÿฉน Applying web UI Colab patches...
๐Ÿ“ฆ Installing aria2...
โฌ Downloading MIX-Pro-V4.safetensors to /content/stable-diffusion-webui/models/Stable-diffusion...
โฌ Downloading kl-f8-anime2.ckpt to /content/stable-diffusion-webui/models/VAE...
/content/stable-diffusion-webui
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
Version: v1.4.1
Commit hash: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning CodeFormer into /content/stable-diffusion-webui/repositories/CodeFormer...
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing pycloudflared

Installing ImageReward requirement for image browser

Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/MIX-Pro-V4.safetensors --vae-path /content/stable-diffusion-webui/models/VAE/kl-f8-anime2.ckpt --share
src/tcmalloc.cc:283] Attempt to free invalid pointer 0x5b96c49d7740
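
The later logs in this thread show the memory patch switched from preloading the full libtcmalloc.so to a standalone libtcmalloc_minimal.so.4, which does not trip this "invalid pointer" abort. A sketch of the relevant notebook change, assuming the minimal library has already been placed in /content (the logs only show the resulting path, not how it got there):

# assumes /content/libtcmalloc_minimal.so.4 already exists in the runtime
%env LD_PRELOAD=/content/libtcmalloc_minimal.so.4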

Unify launch arguments for ease of editing

Problem

When changing a launch parameter, every notebook has to be modified. I don't want that because I'm too lazy for this.

Solution

Why not utilize configs/utility.py for this?

Change

args = [
  "--xformers [...]",
  # more stuff
]

to

args = [
  utility.default_args,
  # more stuff
]

and add the parameters to configs/utility.py instead.

# configs/utility.py
default_args = "--xformers [...]"

But since adding more arguments will make the code scroll horizontally, why not do this instead:

#configs/utility.py
_args_array = [
  "--xformers",
  "--lowram",
  # more args
]

default_args = " ".join(_args_array)

Colab Notebooks Disconnect Within A Few Seconds, Repetitively

๐Ÿ‘ Utility script imported.
๐ŸŒŸ Installing stable-diffusion-webui...
๐Ÿ“ฆ Installing 12 extensions...
โ”” aspect-ratio-preset
โ”” batchlinks
โ”” cutoff
โ”” dynamic-thresholding
โ”” images-browser
โ”” latent-couple-two-shot
โ”” session-organizer
โ”” state
โ”” tagcomplete
โ”” tiled-multidiffusion-upscaler
โ”” tokenizer
โ”” tunnels
๐Ÿ”ง Fetching configs...
๐Ÿ’‰ Fetching embeddings...
๐Ÿฉน Applying web UI Colab patches...
๐Ÿฉน Applying Colab memory patches...
env: LD_PRELOAD=/content/libtcmalloc_minimal.so.4
๐Ÿ“ฆ Installing aria2...

Then the cell would disconnect. I also tried other notebooks, but they all had roughly the same result, disconnecting within a few seconds. Maybe it's my connection, but I doubt it, since my signal is consistently at full bars. It's not the RAM usage either, which is an issue you already fixed. So I'm genuinely confused about why the notebooks disconnect so incredibly quickly.

WebUI not launching

๐Ÿ‘ Utility script imported.
๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ IF YOU DONT HAVE COLAB PRO ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ THIS NOTEBOOK WILL STOP FUNCTIONING IMMEDIATELY ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ OR AT 10 MINUTES ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ OR RANDOMLY ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ you can ignore this if you have pro btw, but yeah ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ
๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ๐Ÿšจ
๐ŸŒŸ Installing stable-diffusion-webui...
๐Ÿ“ฆ Installing 12 extensions...
โ”” aspect-ratio-preset
โ”” batchlinks
โ”” cutoff
โ”” dynamic-thresholding
โ”” images-browser
โ”” latent-couple-two-shot
โ”” session-organizer
โ”” state
โ”” tagcomplete
โ”” tiled-multidiffusion-upscaler
โ”” tokenizer
โ”” tunnels
๐Ÿ”ง Fetching configs...
๐Ÿ’‰ Fetching embeddings...
๐Ÿฉน Applying web UI Colab patches...
๐Ÿฉน Applying Colab memory patches...
env: LD_PRELOAD=/content/libtcmalloc_minimal.so.4
๐Ÿ“ฆ Installing aria2...
โฌ Downloading Anything-V3-full-pruned.safetensors to /content/stable-diffusion-webui/models/Stable-diffusion...
โฌ Downloading anything.vae.pt to /content/stable-diffusion-webui/models/VAE...
/content/stable-diffusion-webui
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Version: ## 1.4.0
Commit hash: 394ffa7b0a7fff3ec484bcd084e673a8b301ccc8
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning CodeFormer into /content/stable-diffusion-webui/repositories/CodeFormer...
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing pycloudflared

Installing ImageReward requirement for image browser

Launching Web UI with arguments: --opt-sdp-attention --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --gradio-queue --ckpt /content/stable-diffusion-webui/models/Stable-diffusion/Anything-V3-full-pruned.safetensors --vae-path /content/stable-diffusion-webui/models/VAE/anything.vae.pt --share
2023-11-05 16:18:36.509923: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-05 16:18:36.509983: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-05 16:18:36.510023: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 38, in <module>
    main()
  File "/content/stable-diffusion-webui/launch.py", line 34, in main
    start()
  File "/content/stable-diffusion-webui/modules/launch_utils.py", line 340, in start
    import webui
  File "/content/stable-diffusion-webui/webui.py", line 35, in <module>
    import gradio
  File "/usr/local/lib/python3.10/dist-packages/gradio/__init__.py", line 3, in <module>
    import gradio.components as components
  File "/usr/local/lib/python3.10/dist-packages/gradio/components.py", line 55, in <module>
    from gradio import processing_utils, utils
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 339, in <module>
    class AsyncRequest:
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 358, in AsyncRequest
    client = httpx.AsyncClient()
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1397, in __init__
    self._transport = self._init_transport(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1445, in _init_transport
    return AsyncHTTPTransport(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 275, in __init__
    self._pool = httpcore.AsyncConnectionPool(
TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
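
The traceback bottoms out in a version mismatch: the installed httpx passes a socket_options argument that the installed httpcore does not accept. A commonly reported workaround (not verified here) is to pin httpx to an older release in a cell before launching the web UI:

# possible workaround, unverified: pin httpx to a version that matches the old gradio/httpcore API
!pip install -q httpx==0.24.1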

NameError: name 'os' is not defined on MixPro Model

๐Ÿ‘ Utility script imported.
๐Ÿฉน Applying Colab memory fix...
env: LD_PRELOAD=libtcmalloc.so
๐Ÿ”ผ Selected the latest version of the web UI.
๐ŸŒŸ Installing stable-diffusion-webui...
๐Ÿ”ผ Installing the latest versions of the extensions.
๐Ÿ“ฆ Installing 12 extensions...
  โ”” aspect-ratio-preset
  โ”” batchlinks
  โ”” cutoff
  โ”” dynamic-thresholding
  โ”” images-browser
  โ”” latent-couple-two-shot
  โ”” session-organizer
  โ”” state
  โ”” tagcomplete
  โ”” tiled-multidiffusion-upscaler
  โ”” tokenizer
  โ”” tunnels
๐Ÿ”ง Fetching configs...
๐Ÿ’‰ Fetching embeddings...
๐Ÿฉน Applying web UI Colab patches...
๐Ÿ“ฆ Installing aria2...
โฌ Downloading MIX-Pro-V4.safetensors to /content/stable-diffusion-webui/models/Stable-diffusion...
โฌ Downloading kl-f8-anime2.ckpt to /content/stable-diffusion-webui/models/VAE...
/content/stable-diffusion-webui
Python 3.10.11 (main, Apr  5 2023, 14:15:10) [GCC 9.4.0]
Version: v1.3.0
Commit hash: 20ae71faa8ef035c31aa3a410b707d792c8203a3
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning Taming Transformers into /content/stable-diffusion-webui/repositories/taming-transformers...
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning CodeFormer into /content/stable-diffusion-webui/repositories/CodeFormer...
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Installing requirements for CodeFormer
Installing requirements
Installing pycloudflared



Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 40, in <module>
    main()
  File "/content/stable-diffusion-webui/launch.py", line 30, in main
    os.system(f"""sed -i -e "s/dict()))$/dict())).cuda()/gm" /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py""")
NameError: name 'os' is not defined
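
The error comes from the Colab patch injecting an os.system(...) call into launch.py's main() without os being imported in that scope. One way to make the injected line self-contained (a sketch, not necessarily the project's actual fix) is to import os in the same statement:

# injected replacement line: import os locally before using it
import os; os.system("""sed -i -e "s/dict()))$/dict())).cuda()/gm" /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py""")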

Cannot use "provide your own model" successfully with Taiyi model

I tried to use the IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1 (https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1/blob/main/model.ckpt) and IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 (https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1) models with the "provide your own model" Colab notebook, but they fail to load.

IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1:
Model link: https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/resolve/main/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt
VAE link: https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/resolve/main/vae/diffusion_pytorch_model.bin

Messages when launching web UI:
Launching Web UI with arguments: --xformers --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --opt-channelslast --gradio-queue --ckpt /content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt --vae-path /content/stable-diffusion-webui/models/VAE/diffusion_pytorch_model.bin --ckpt-dir /content/models --share
/usr/local/lib/python3.9/dist-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Loading weights [None] from /content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 1.14MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 24.4MB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 140kB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 285kB/s]
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 1.67MB/s]
Downloading pytorch_model.bin: 100% 1.71G/1.71G [00:19<00:00, 86.8MB/s]
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/webui.py", line 136, in initialize
modules.sd_models.load_model()
File "/content/stable-diffusion-webui/modules/sd_models.py", line 436, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 277, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
size mismatch for cond_stage_model.transformer.text_model.embeddings.position_ids: copying a param with shape torch.Size([1, 512]) from checkpoint, the shape in current model is torch.Size([1, 77]).

Stable diffusion model failed to load, exiting

Lora error?

Hello. I'm having an issue with LoRAs. I managed to ignore the lora-block-weight error all this time, but a character LoRA not loading can't be ignored. I'm getting the error below and LoRAs aren't working. I use "provide_your_own_models".

BatchLinks Downloads finished!

reading lora /content/stable-diffusion-webui/models/Lora/lala_satalin_deviluke_v1.safetensors: AssertionError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 83, in init
self.metadata = sd_models.read_metadata_from_safetensors(filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 230, in read_metadata_from_safetensors
assert metadata_len > 2 and json_start in (b'{"', b"{'"), f"{filename} is not a safetensors file"
AssertionError: /content/stable-diffusion-webui/models/Lora/lala_satalin_deviluke_v1.safetensors is not a safetensors file

Total progress: 0it [00:00, ?it/s]loading Lora /content/stable-diffusion-webui/models/Lora/lala_satalin_deviluke_v1.safetensors: SafetensorError
Traceback (most recent call last):
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 253, in load_loras
lora = load_lora(name, lora_on_disk)
File "/content/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 162, in load_lora
sd = sd_models.read_state_dict(lora_on_disk.filename)
File "/content/stable-diffusion-webui/modules/sd_models.py", line 250, in read_state_dict
pl_sd = safetensors.torch.load_file(checkpoint_file, device=device)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 259, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

*** Error running process_batch: /content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/scripts.py", line 490, in process_batch
script.process_batch(p, *script_args, **kwargs)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 378, in process_batch
if not self.isnet: loradealer(self, o_prompts ,self.lratios,self.elementals)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 694, in loradealer
te,unet = multidealer(te,unet)
File "/content/stable-diffusion-webui/extensions/sd-webui-lora-block-weight/scripts/lora_block_weight.py", line 775, in multidealer
return float(t),float(u)
ValueError: could not convert string to float: 'face'
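
Both the "is not a safetensors file" and the HeaderTooLarge errors usually mean the downloaded file is not actually a safetensors payload, for example an HTML error page saved under a .safetensors name. A small diagnostic sketch that checks the header the same way the web UI's assertion does (the path is the one from the traceback):

import struct

path = "/content/stable-diffusion-webui/models/Lora/lala_satalin_deviluke_v1.safetensors"

with open(path, "rb") as f:
    head = f.read(8)   # safetensors starts with an 8-byte little-endian JSON header length
    start = f.read(2)  # first two bytes of the JSON header itself

if len(head) < 8:
    print("file is too small to be a safetensors file")
else:
    header_len = struct.unpack("<Q", head)[0]
    print("declared header length:", header_len, "| header starts with:", start)
    if header_len <= 2 or start not in (b'{"', b"{'"):
        print("not a valid safetensors file - re-download it (an HTML error page is a common culprit)")
    else:
        print("header looks plausible")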
