seruva19 / kubin
Web-GUI for Kandinsky text-to-image diffusion models.
Update: it seems this is not an error, but rather unintuitive gradio widget behaviour. Will elaborate later.
Can you add an option to upload other models? For some images, ESRGAN is not the ideal upscaling model. Thank you!
D:\back\Kandinsky-2\kandinsky\kubin\venv\lib\site-packages\torch\amp\autocast_mode.py:204: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn("User provided device_type of 'cuda', but CUDA is not available. Disabling")
With 8 GB VRAM, generation even at 256×256 reports insufficient VRAM when using the Kandinsky 2.2 diffusers module, with settings optimized according to the wiki; 2.1 can run at 768×768.
I do these steps for a brand new setup...
git clone https://github.com/seruva19/kubin
cd kubin
install.bat
install-torch.bat
start.bat
Enter a prompt, click Generate. The animated icon starts spinning and the timer counts up, but nothing happens. No activity in Task Manager. Verified on a few other PCs, so it's not just a one-off problem.
Installed kubin as shown in this tutorial: https://www.youtube.com/watch?v=CVPxw4UDGr4
Then I wanted to download the models, so I ran a text2img prompt, and this error came up:
task queued: text2img
clearing memory
Traceback (most recent call last):
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gradio\blocks.py", line 1021, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\AI\kardinsky\kubin\src\ui_blocks\t2i.py", line 106, in generate
return generate_fn(params)
File "D:\AI\kardinsky\kubin\src\webui.py", line 33, in <lambda>
generate_fn=lambda params: kubin.model.t2i(params),
File "D:\AI\kardinsky\kubin\src\models\model_kd2.py", line 98, in t2i
params = self.prepare("text2img").seed(params)
File "D:\AI\kardinsky\kubin\src\models\model_kd2.py", line 32, in prepare
self.flush()
File "D:\AI\kardinsky\kubin\src\models\model_kd2.py", line 83, in flush
with torch.cuda.device("cuda"):
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\cuda\__init__.py", line 316, in __enter__
self.prev_idx = torch.cuda._exchange_device(self.idx)
File "C:\Users\enter\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\torch\cuda\__init__.py", line 77, in _exchange_device
if device < 0:
TypeError: '<' not supported between instances of 'NoneType' and 'int'
What is also very interesting is that under the "System" tab in Settings it says I have 0 VRAM.
Any help is appreciated!
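The "0 VRAM" reading and the earlier "CUDA is not available" warning suggest a CPU-only torch build, so torch.cuda.device("cuda") never receives a usable device index (hence the NoneType comparison). A minimal guard sketch for the flush code (an assumption, not the project's actual fix; reinstalling a CUDA-enabled torch, e.g. via install-torch.bat, is the more likely remedy):

import torch

def flush():
    # Only touch CUDA when a CUDA-enabled torch build and a GPU are present
    if torch.cuda.is_available():
        with torch.cuda.device("cuda"):
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()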
RuntimeError: Expected is_sm90 || is_sm8x || is_sm75 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
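(This check typically originates from flash-attention kernels, which require an sm75/sm8x/sm90 GPU, i.e. Turing, Ampere, or Hopper; running without flash attention on older GPUs is a likely workaround, though that is an assumption here.)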
Some of the settings are only applied after the UI is shown (the model is loaded only on the first click of "Generate", etc.). Therefore, it would be nice to have the option to tune the app's settings in the UI (instead of the CLI).
It might also be important for users to be able to store their session settings in a file and load them on app start, especially since changing some settings requires a restart of the whole app.
What should be done:
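A minimal sketch of the persistence part (the file name and helper names are hypothetical, not the app's actual API):

import yaml

SESSION_FILE = "session.yaml"  # hypothetical location

def save_session(settings: dict) -> None:
    # Persist current UI settings so they survive an app restart
    with open(SESSION_FILE, "w") as f:
        yaml.safe_dump(settings, f)

def load_session() -> dict:
    # Restore previous session settings, or fall back to defaults
    try:
        with open(SESSION_FILE) as f:
            return yaml.safe_load(f) or {}
    except FileNotFoundError:
        return {}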
Not sure what's happening, but when using a negative prompt the image gets a strange cyan overlay tint, like an anime-style gradient background or something.
It is even more noticeable in photos and portraits, and more pronounced with an increased CFG guidance scale like 7.0.
The DDIM sampler has somewhat less of an effect, but it is still noticeable.
Example prompt and negative below (same seed, p-sampler); even though 3d/cg is somewhat conflicting with this example, IIRC that is not the problem, and it happens with highly stylized prompts too.
It is less pronounced with a simple negative like "test, washed out colors, oversaturated", especially at lower CFG like 3.0, but the image clearly gets brighter. Maybe it is supposed to be that way?
prompt
beautiful sphere, glossy skin, masterpiece, concept art, acute angle, hdr, sharp focus, forest background
negative
lowres, ((bad anatomy)), bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, black and white, red eyes, big eyes, long neck, picture_frame, cartoon, ((disfigured)), ((bad art)), ((deformed)), ((poorly drawn)), ((extra limbs)), ((b&w)), weird colors, blurry, ((ugly_face)), cg, 3d , 3d render
Is it only me, or has generation slowed down after the update? I use the 2.1 model and it takes me 3 minutes to generate one 960x960 image. As far as I remember, I was able to generate even bigger resolutions faster. I'm talking about mixing mode.
Masks generated by the 'Segmentation' extension need to be passed to the 'Inpaint' tab. Currently there is no way to do this.
LoRA training (dev branch) does not work in Google Colab (or Paperspace), resulting in CUDA OOM.
I adapted the same scripts featured in the "official" notebook, but while that notebook works fine, my GUI adaptation results in VRAM overflow.
There must be something inherently wrong with my code. I need to figure it out... somehow.
It is also possibly the root cause of the following problems: #122, #115, #112
This issue is primarily intended for personal use and contains a list of features planned for future implementation. Btw, anyone can ask for a feature in the comments, and I will add it to the list and forget about it.
⚡ Top Priority - has the highest priority
✅ Implemented - has been fully implemented and is available in the application
🔨 In Progress - is currently being developed or worked on
🔍 Researching - is in the research phase to gather more information and determine its feasibility
📅 Planned - is planned for implementation in the future but has not been started yet
⛔ Not Planned - is not planned for implementation in the foreseeable future
😔 Too Difficult - is deemed too challenging or technically complex to implement at the moment
🤷 Not Decided - the decision to implement or not implement this feature has not been made yet
💡 Idea - is in the idea or concept stage and has not been fully defined or scoped yet
🐞 Bug - is an error, should be fixed ASAP; why is it here in the first place?!
🚫 Deprecated - has been deprecated and probably won't be implemented
📅 | Add Kandinsky 3.1 #174
📅 | Add progress indicator and inference cancellation option ai-forever/Kandinsky-2/pull/69
📅 | Add (auto)saving and recovering session params
📅 | Add prompt weighting and advanced syntax
📅 | Remove CLIP limit of 77 tokens #2136
📅 | Autofill of params when sending image to another tab
📅 | Add tools for comparison (XYZ grid) #134
📅 | Pasting image from clipboard to input image placeholders
📅 | Add tools for automating prompt construction #79
⚡ | Add support for mixing several images (>2) #172
📅 ⚡ | Add ControlNet inpaint
📅 | Inpaint with image (instead of prompt)
🐞 | Inpaint mask position is not correct when input image is resized
📅 | "Tainted" image after inpainting #23 #100
📅 | Support uploading custom mask #16
🔍 🐞 | Unobvious image control behaviour #53
📅 | Add 'partial' outpainting #70
📅 | Add ControlNet outpaint
📅 | Prompt-less outpainting
📅 💡 | Add extension for region composition control
📅 | Add more upscaling models #83
📅 | Add sending extracted mask to inpaint tab #46
📅 | Add support for thumbnail preview #20
🔨 | Add paging
📅 | Passing recovered metadata to inference tabs
📅 | Add GUI for 2.2 fine-tuning
📅 | Add button to stop training
Inference might not work with 8 GB: #2 (but see #14)
Textual inversion training requires 21-22 GB (source, source 2)
Dreambooth training requires >= 32 GB (source)
Edit: but training prior/unclip separately fits into 16 GB VRAM #74
Integrating K2 into diffusers might open up opportunities for optimization huggingface/diffusers#2985
CUDA out of memory. Tried to allocate 42.00 MiB (GPU 0; 6.00 GiB total capacity; 5.24 GiB already allocated; 0 bytes free; 5.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The Kandinsky 2.2 model with the diffusers pipeline generates the first image successfully, but reports an error when generating another image. Here are my settings:
Enable prior generation on CPU
Enable half precision weights
Enable sliced attention
Enable sequential CPU offload
Enable channels last memory format
The error also occurs when "Enable channels last memory format" is turned off.
The code that reported the error:
task queued: text2img
Traceback (most recent call last):
File "M:\ai\kubin\venv\lib\site-packages\gradio\routes.py", line 439, in run_predict
output = await app.get_blocks().process_api(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1384, in process_api
result = await self.call_function(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1089, in call_function
prediction = await anyio.to_thread.run_sync(
File "M:\ai\kubin\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "M:\ai\kubin\venv\lib\site-packages\gradio\utils.py", line 700, in wrapper
response = f(*args, **kwargs)
File "M:\ai\kubin\src\ui_blocks\t2i.py", line 240, in generate
return generate_fn(params)
File "M:\ai\kubin\src\webui.py", line 34, in
generate_fn=lambda params: kubin.model.t2i(params),
File "M:\ai\kubin\src\models\model_diffusers22\model_22.py", line 120, in t2i
prior, decoder = self.prepareModel("text2img")
File "M:\ai\kubin\src\models\model_diffusers22\model_22.py", line 76, in prepareModel
prior, decoder = prepare_weights_for_task(self, task)
File "M:\ai\kubin\src\models\model_diffusers22\model_22_utils.py", line 137, in prepare_weights_for_task
to_device(model.params, current_prior, current_decoder)
File "M:\ai\kubin\src\models\model_diffusers22\model_22_utils.py", line 252, in to_device
prior.to(prior_device)
File "M:\ai\kubin\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 682, in to
module.to(torch_device, torch_dtype)
File "M:\ai\kubin\venv\lib\site-packages\transformers\modeling_utils.py", line 1902, in to
return super().to(*args, **kwargs)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "M:\ai\kubin\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
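A plausible explanation (an assumption, not a confirmed diagnosis): with "Enable sequential CPU offload" active, accelerate hooks manage device placement and leave module weights as meta tensors, so a later explicit .to(device) on the same pipeline has nothing to copy. A minimal sketch of the conflict, using the standard community checkpoint:

import torch
from diffusers import KandinskyV22PriorPipeline

prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
prior.enable_sequential_cpu_offload()  # accelerate hooks now own device placement
# prior.to("cuda")  # would raise: NotImplementedError: Cannot copy out of meta tensor; no data!

Once offload is enabled, the pipeline should be used as-is (the hooks move weights on demand) or rebuilt with from_pretrained instead of being moved with .to().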
When I run the interface activation command in PowerShell, I get the message "Running on local URL: http://0.0.0.0:7860", which does not open in a browser, while the wiki lists the correct page address (http://127.0.0.1:7860). I think this should be corrected so as not to mislead people.
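(0.0.0.0 is the bind-to-all-interfaces address reported by the server, not a browsable URL; the page is reachable at http://127.0.0.1:7860, as the wiki says.)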
On Windows 11 using PowerShell, I am receiving:
ModuleNotFoundError: No module named 'blip.models'
after running off main.
It seems the PR merged last night, #118, updates clip-interrogator to 0.6.0. In order to run ./start.bat, I had to re-install clip-interrogator at version 0.5.0:
pip install clip-interrogator==0.5.0
kubin now works as expected after downgrading the package.
Add an option to save images to GDrive; if you get disconnected, you lose all images :(
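A minimal sketch of what this could look like in a Colab session (the target folder is hypothetical; the source path matches the default output dir):

from google.colab import drive
import os, shutil

drive.mount("/content/drive")
dst = "/content/drive/MyDrive/kubin-output"  # hypothetical target folder
os.makedirs(dst, exist_ok=True)

# Copy every generated image out of the (ephemeral) session's output folder
for name in os.listdir("/content/kubin/output"):
    src = os.path.join("/content/kubin/output", name)
    if os.path.isfile(src):
        shutil.copy(src, dst)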
optimizations: xformers for prior; sequential CPU offloading for prior; xformers for decoder; sequential CPU offloading for decoder; attention slicing for decoder: slice_size=max
seed generated: 54955802784
0% 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 439, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1384, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1089, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 700, in wrapper
response = f(*args, **kwargs)
File "/content/kubin/src/ui_blocks/t2i.py", line 240, in generate
return generate_fn(params)
File "/content/kubin/src/webui.py", line 34, in
generate_fn=lambda params: kubin.model.t2i(params),
File "/content/kubin/src/models/model_diffusers22/model_22.py", line 116, in t2i
return self.t2i_cnet(params)
File "/content/kubin/src/models/model_diffusers22/model_22.py", line 457, in t2i_cnet
image_embeds, zero_embeds = prior(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py", line 494, in call
predicted_image_embedding = self.prior(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/prior_transformer.py", line 346, in forward
hidden_states = block(hidden_states, attention_mask=attention_mask)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py", line 154, in forward
attn_output = self.attn1(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 321, in forward
return self.processor(
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 1046, in call
hidden_states = xformers.ops.memory_efficient_attention(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/init.py", line 192, in memory_efficient_attention
return _memory_efficient_attention(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/init.py", line 290, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/init.py", line 308, in _memory_efficient_attention_forward
_ensure_op_supports_or_raise(ValueError, "memory_efficient_attention", op, inp)
File "/usr/local/lib/python3.10/dist-packages/xformers/ops/fmha/dispatch.py", line 45, in _ensure_op_supports_or_raise
raise exc_type(
ValueError: Operator memory_efficient_attention
does not support inputs:
query : shape=(64, 81, 1, 64) (torch.float16)
key : shape=(64, 81, 1, 64) (torch.float16)
value : shape=(64, 81, 1, 64) (torch.float16)
attn_bias : <class 'torch.Tensor'>
p : 0.0
flshattF
is not supported because:
attn_bias type is <class 'torch.Tensor'>
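The failing op is flash attention (flshattF), which in this xformers build rejects a plain tensor attn_bias. A hedged workaround sketch, assuming the prior pipeline object is available as prior, is to fall back to default attention for the prior while keeping xformers for the decoder:

# Disable xformers only for the prior; the decoder can keep using it
prior.disable_xformers_memory_efficient_attention()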
Link to template: https://runpod.io/gsc?template=hwwh1x6mhq&ref=vfker49t
All requirements are preinstalled for all extensions.
Note: for now the model is not included (working on finding the correct model location).
I have found other people's fine-tuned models (decoder_fp16.ckpt and prior_fp16.ckpt) elsewhere. Is there a corresponding tool that can convert them into a model that diffusers can use?
name 'KandinskyPriorPipeline' is not defined
Traceback (most recent call last):
File "M:\ai\kubin\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "M:\ai\kubin\venv\lib\site-packages\gradio\blocks.py", line 1021, in call_function
prediction = await anyio.to_thread.run_sync(
File "M:\ai\kubin\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "M:\ai\kubin\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "M:\ai\kubin\src\ui_blocks\t2i.py", line 105, in generate
return generate_fn(params)
File "M:\ai\kubin\src\webui.py", line 32, in
generate_fn=lambda params: kubin.model.t2i(params),
File "M:\ai\kubin\src\models\model_diffusers.py", line 214, in t2i
params = self.prepare("text2img").seed(params)
File "M:\ai\kubin\src\models\model_diffusers.py", line 56, in prepare
self.pipe_prior = KandinskyPriorPipeline.from_pretrained(
NameError: name 'KandinskyPriorPipeline' is not defined
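A likely cause (an assumption, not confirmed in the thread): the installed diffusers version predates the Kandinsky pipelines, so the import fails and the name is left undefined; upgrading should fix it:

pip install -U diffusers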
Thank you very much, the problem has been solved.
(venv) PS L:\Kandinsky-2\kubin> py src/kubin.py --enable-flash-attention
usage: kubin.py [-h] [--device DEVICE] [--model-version MODEL_VERSION] [--use-flash-attention] [--cache-dir CACHE_DIR]
[--output-dir OUTPUT_DIR] [--task-type TASK_TYPE] [--share SHARE] [--server-name SERVER_NAME]
[--server-port SERVER_PORT] [--concurrency-count CONCURRENCY_COUNT] [--debug] [--locale LOCALE]
[--model-config MODEL_CONFIG]
kubin.py: error: unrecognized arguments: --enable-flash-attention
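(Per the usage output above, the correct flag is --use-flash-attention, not --enable-flash-attention.)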
These wonderful folks have almost finished the implementation of K2 in diffusers: https://www.github.com/huggingface/diffusers/pull/3308
That means I don't have to implement this on my own, which is great xD
To add it to the app:
Hi,
whenever I send an inpainted image back to inpainting and run another inpainting prompt, the quality of the image gets progressively worse. I have seen people inpainting for hours with Stable Diffusion, with hundreds or thousands of iterations and no loss in quality. Is there anything I can do about that? Why is it so different between SD and Kandinsky?
I'm getting this error when trying to generate: "There is no current event loop in thread 'AnyIO worker thread'".
Currently outpainting processes the image at its whole size (i.e. input image + offset mask), and while this provides the best content coherence, it also makes it pretty difficult to avoid CUDA OOM 😕
So it is essential to add a mode where only the extended area (partially overlapping with the input image) is processed and then merged with the source image, as sketched below. This could also open the door (in prospect) to things like an 'infinite canvas'.
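A minimal sketch of the merge step (assumptions: PIL images, extending to the right with a regenerated strip that overlaps the source by overlap pixels):

from PIL import Image

def merge_outpaint(source: Image.Image, strip: Image.Image, overlap: int) -> Image.Image:
    # The regenerated strip covers `overlap` pixels of the source plus the new area
    out = Image.new("RGB", (source.width + strip.width - overlap, source.height))
    out.paste(source, (0, 0))
    out.paste(strip, (source.width - overlap, 0))
    return out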
Step 1: Upon completion of all setup steps, Gradio starts:
Running on local URL: http://127.0.0.1:7860/
Running on public URL: https://5267e59a6cc5ceed42.gradio.live/
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy
from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Step 2: The following output appears:
scanning system information
task queued: text2img
prior/diffusion_pytorch_model.safetensors not found
Loading pipeline components...: 100% 6/6 [00:47<00:00, 7.89s/it]
following pipelines, if active, will be released for text2img task: ['text2img_cnet', 'inpainting']
Downloading (…)ain/unet/config.json: 100% 1.67k/1.67k [00:00<00:00, 7.52MB/s]
Downloading (…)on_pytorch_model.bin: 100% 5.01G/5.01G [02:20<00:00, 35.7MB/s]
Step 3: After downloading and installing the dependencies from Step 2 (which doesn't always complete; most of the time the process freezes at some step), Gradio throws "connection errored out" and stops working.
(kandinsky) PS W:\kubin> python src/kubin.py
launching with: {'from_config': '', 'device': 'cuda', 'model_version': '2.1', 'use_flash_attention': False, 'cache_dir': 'models', 'output_dir': 'output', 'task_type': 'text2img', 'share': 'none', 'server_name': '127.0.0.1', 'server_port': 7860, 'concurrency_count': 2, 'debug': True, 'locale': 'en-us', 'model_config': 'config.kd2', 'max_mix': 2, 'extensions_path': 'extensions', 'enabled_extensions': None, 'disabled_extensions': None, 'skip_install': False, 'safe_mode': False, 'mock': False, 'pipeline': 'native', 'theme': 'default'}
setting model params
found 6 extensions
1: extension 'kd-image-browser' found
1: extension 'kd-image-browser' successfully registered
2: extension 'kd-interrogator' found
2: extension 'kd-interrogator' has requirements.txt, but was already installed, skipping
2: extension 'kd-interrogator' successfully registered
3: extension 'kd-mesh-gen' found
3: extension 'kd-mesh-gen' has requirements.txt, but was already installed, skipping
3: extension 'kd-mesh-gen' successfully registered
4: extension 'kd-prompt-styles' found
4: extension 'kd-prompt-styles' successfully registered
5: extension 'kd-segmentation' found
5: extension 'kd-segmentation' has requirements.txt, but was already installed, skipping
5: extension 'kd-segmentation' successfully registered
6: extension 'kd-upscaler' found
6: extension 'kd-upscaler' has requirements.txt, but was already installed, skipping
6: extension 'kd-upscaler' successfully registered
Traceback (most recent call last):
File "W:\kubin\src\kubin.py", line 40, in
ui.queue(concurrency_count=kubin.args.concurrency_count, api_open=False).launch(
TypeError: Blocks.launch() got an unexpected keyword argument 'allowed_paths'
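A likely cause (an assumption): the installed gradio is too old to accept the allowed_paths argument to Blocks.launch(); upgrading gradio inside the venv should make the argument available (assuming the app supports the newer version):

pip install -U gradio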
Hello, I have just reinstalled a fresh version of the GUI, as mine was severely outdated. I keep getting stuck in a download loop where the script seems to terminate the model downloads for the safetensor files before they finish and then tries to redownload them from scratch. I've tried wiping the models folder a couple of times and retrying, but no luck.
Is there any way I can download and place the files manually in the models folder? I noticed the auto-download creates a lot of folders, such as blobs, refs, and snapshots, so I was a bit unsure of the paths to place the manually downloaded files in. Some assistance would be much appreciated.
Thank you!
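The blobs/refs/snapshots folders are the standard Hugging Face Hub cache layout. A hedged sketch of pre-downloading a model into kubin's cache dir with huggingface_hub (the repo id is an example; "models" matches the default --cache-dir):

from huggingface_hub import snapshot_download

snapshot_download(
    "kandinsky-community/kandinsky-2-2-decoder",  # example repo id
    cache_dir="models",       # kubin's default cache dir
    resume_download=True,     # pick up interrupted downloads instead of restarting
)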
Is there any way I can find the prompt and parameters again from images I generated?
Hey man, I hope you're well.
For the image parameters, can you please modify the sliders from:
steps = gr.Slider(0
to
steps = gr.Slider(1
and for the image width and height from:
width = gr.Slider(1, 1024, 768, step=1,
to
width = gr.Slider(1, 1024, 768, step=16,
or
width = gr.Slider(1, 1024, 768, step=32,
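For context, a complete Gradio slider definition with these positional arguments (minimum, maximum, initial value) would look like this; the label is an assumption:

import gradio as gr

width = gr.Slider(minimum=1, maximum=1024, value=768, step=16, label="Width")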
The gallery select event handler may fail to respond to user input.
This issue only occurs when working remotely.
found 10 extensions
1: extension 'fb-styles' found
Traceback (most recent call last):
File "W:\kubin\src\kubin.py", line 35, in
kubin.init_extensions()
File "W:\kubin\src\env.py", line 31, in init_extensions
self.ext_registry.register(self)
File "W:\kubin\src\extension\ext_registry.py", line 59, in register
spec.loader.exec_module(module)
File "", line 879, in exec_module
File "", line 1016, in get_code
File "", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'W:\kubin\extensions/fb-styles/setup.py'
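(The extension registry apparently loads a setup.py from every extension folder; the 'fb-styles' folder lacks one, so registration fails at startup.)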