
scraed / characteristicguidancewebui

76 stars · 3 watchers · 8 forks · 767 KB

Provides a large-guidance-scale correction for the Stable Diffusion web UI (AUTOMATIC1111), implementing the paper "Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale".

Home Page: https://scraed.github.io/CharacteristicGuidance/

License: Apache License 2.0

Python 100.00%

characteristicguidancewebui's Issues

Turbo_dev branch

How exactly do you switch to the turbo branch? I would like to try it out.

Error on Web UI (dev branch)

After updating CharacteristicGuidanceWebUI e450923...c744c99:

*** Error loading script: forge_inject.py
    Traceback (most recent call last):
      File "…\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "…\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 940, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "…\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 25, in <module>
        from ldm_patched.modules.conds import CONDRegular, CONDCrossAttn
    ModuleNotFoundError: No module named 'ldm_patched'
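The unconditional import fails here because ldm_patched is a Forge-only package (other tracebacks on this page reference it under stable-diffusion-webui-forge), and a stock AUTOMATIC1111 install does not ship it. A minimal sketch of a graceful guard — a hypothetical fix, not the extension's current code:

    # Hypothetical guard: ldm_patched exists only under stable-diffusion-webui-forge,
    # so detect it instead of assuming it is importable.
    try:
        from ldm_patched.modules.conds import CONDRegular, CONDCrossAttn
        FORGE_AVAILABLE = True
    except ModuleNotFoundError:
        FORGE_AVAILABLE = False  # vanilla AUTOMATIC1111: skip the Forge injection path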

Add Webui forge support?

When I'm using webui, I really like how this extension boosts image quality. But when I switch over to webui forge, I run into issues. Compared to webui, forge uses VRAM more efficiently and speeds up generation, so it'd be great if you could add support for webui forge.

Error while using the plugin

The plugin works occasionally, but most of the time I get the following error in the console:

    Traceback (most recent call last):
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
        output = await app.get_blocks().process_api(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
        result = await self.call_function(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
        prediction = await anyio.to_thread.run_sync(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
        result = context.run(func, *args)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
        response = f(*args, **kwargs)
      File "F:\SD\Data\Packages\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 528, in update_plot
        fig, axs = plt.subplots(len(res), 1, figsize=(10, 4 * len(res)))
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\pyplot.py", line 1599, in subplots
        axs = fig.subplots(nrows=nrows, ncols=ncols, sharex=sharex, sharey=sharey,
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\figure.py", line 930, in subplots
        gs = self.add_gridspec(nrows, ncols, figure=self, **gridspec_kw)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\figure.py", line 1542, in add_gridspec
        gs = GridSpec(nrows=nrows, ncols=ncols, figure=self, **kwargs)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\gridspec.py", line 378, in __init__
        super().__init__(nrows, ncols,
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\gridspec.py", line 48, in __init__
        raise ValueError(
    ValueError: Number of rows must be a positive integer, not 0
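The failing frame is update_plot calling plt.subplots(len(res), 1, ...) with len(res) == 0, i.e. the plot callback fires before any iteration data has been recorded. A minimal guard sketch (treating res as the recorded per-step data is an assumption; this is not the extension's actual code):

    import matplotlib.pyplot as plt

    def update_plot(res):
        # Hypothetical guard: matplotlib rejects nrows=0, so skip plotting
        # until some iteration info has actually been recorded
        if not res:
            return None
        fig, axs = plt.subplots(len(res), 1, figsize=(10, 4 * len(res)))
        return fig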

Also, this is the info when I start my instance of webui:

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Check mmcv version...
Your mmcv version 2.0.1 may not work mmyolo.
Please install mmcv version 2.0.0 manually or uninstall mmcv and restart UI again to install mmcv 2.0.0
Check mmengine version...
your mmengine version is 0.8.5
Launching Web UI with arguments: --medvram-sdxl --xformers --api --skip-python-version-check --listen --enable-insecure-extension-access
Style database not found: F:\SD\Data\Packages\stable-diffusion-webui\styles.csv
[-] ADetailer initialized. version: 23.12.0, num models: 20
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: F:\SD\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-01-09 08:54:14,952 - ControlNet - INFO - ControlNet v1.1.425
2024-01-09 08:54:15,233 - ControlNet - INFO - ControlNet v1.1.425
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [838643492f] from F:\SD\Data\Packages\stable-diffusion-webui\models\Stable-diffusion\15\aniverse_v16Pruned.safetensors
Creating model from config: F:\SD\Data\Packages\stable-diffusion-webui\configs\v1-inference.yaml
Total 8 mmdet, 7 yolo and 3 mediapipe models.
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\face_yolov8n.pth!
You can enable model validity tester in the Settings-> μ DDetailer.
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\face_yolov8s.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\hand_yolov8n.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\hand_yolov8s.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\mmdet_anime-face_yolov3.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\mmdet_dd-person_mask2former.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\yolov5_ins_n.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\yolov5_ins_s.pth!
Total 8 valid mmdet configs are found.
You can disable validity tester in the Settings-> μ DDetailer.
Check config files...
Done
2024-01-09 08:54:19,590 - AnimateDiff - INFO - Injecting LCM to UI.
loading.. cn_modole

  • ControlNet extension sd-webui-controlnet found
  • default µ DDetailer model= bbox/face_yolov8n.pth [882ebbb6]
    2024-01-09 08:54:20,151 - AnimateDiff - INFO - Hacking i2i-batch.
  • default µ DDetailer model= bbox/face_yolov8n.pth [882ebbb6]
    Running on local URL: http://0.0.0.0:7860
    Loading VAE weights specified in settings: F:\SD\Data\Packages\stable-diffusion-webui\models\VAE\15\vae-ft-mse-840000-ema-pruned.safetensors

To create a public link, set share=True in launch().
Startup time: 46.0s (prepare environment: 11.3s, import torch: 5.4s, import gradio: 2.3s, setup paths: 3.0s, initialize shared: 0.3s, other imports: 2.1s, setup codeformer: 0.4s, list SD models: 0.7s, load scripts: 4.0s, scripts before_ui_callback: 4.0s, create ui: 4.6s, gradio launch: 6.8s, add APIs: 0.1s, app_started_callback: 1.1s).
Applying attention optimization: xformers... done.
Model loaded in 18.5s (load weights from disk: 0.7s, create model: 0.8s, apply weights to model: 12.7s, load VAE: 1.9s, move model to device: 0.5s, load textual inversion embeddings: 0.7s, calculate empty prompt: 1.0s).

Can you help me out with this, please?
Thank you!

Problem while trying to generate 1024 by 1024 image

Hi, when I try to use this method (on the Forge webui) at 1024x1024 resolution (needed by SDXL), it throws this error:

File ".../extensions/CharacteristicGuidanceWebUI/scripts/CharaIte.py", line 361, in compute_correction_direction
dxs_cond_part = torch.cat( [*( [(h - 1) * dxs[:,None,...]]*num_x_in_cond )], axis=1 ).view( (dxs.shape[0]*num_x_in_cond, *dxs.shape[1:]) )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: torch.cat(): expected a non-empty list of Tensors
torch.cat(): expected a non-empty list of Tensors

This is caused by num_x_in_cond being zero. Any ideas on how to fix that?
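For reference, the failure mode itself is easy to reproduce: list replication with a count of zero yields an empty list, which torch.cat rejects. A minimal sketch with made-up shapes:

    import torch

    dxs = torch.randn(1, 4, 128, 128)   # correction tensor (made-up shape)
    h = torch.tensor(2.0)               # made-up scale factor
    num_x_in_cond = 0                   # the reported value

    parts = [(h - 1) * dxs[:, None, ...]] * num_x_in_cond   # replication by 0 gives []
    torch.cat(parts, dim=1)  # RuntimeError: torch.cat(): expected a non-empty list of Tensors

Clamping the count (e.g. max(num_x_in_cond, 1)) would silence the crash, but the real question is why the cond/uncond shape ratio comes out as zero at SDXL resolutions.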

Please add support for the "AND" keyword.

While using the default "AND" keyword, or the "AND_SALT" keyword from Neutral Prompt, it fails with "RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0". The error does not occur when Characteristic Guidance is turned off.
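An illustration of the likely mismatch (hypothetical shapes, not the extension's code): "AND" composes several conditionings, so the batched latent gains an extra entry that a correction sized for a single cond/uncond pair cannot broadcast against:

    import torch

    correction = torch.randn(2, 4, 64, 64)  # sized for one cond + one uncond
    x_in = torch.randn(3, 4, 64, 64)        # "A AND B" plus uncond: three batch entries
    x_in + correction  # RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0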

[forge] Errors reported when using the UniPC sampler.

Characteristic Guidance recorded iterations info for 0 steps
Characteristic Guidance recovering the CFGDenoiser

    Traceback (most recent call last):
      File "E:\aidraw\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
        task.work()
      File "E:\aidraw\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
        self.result = self.func(*self.args, **self.kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
        processed = processing.process_images(p)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
        res = process_images_inner(p)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 421, in wrapper
        raise e
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 417, in wrapper
        result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_timesteps.py", line 173, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
        return func()
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_timesteps.py", line 173, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_timesteps_impl.py", line 135, in unipc
        x = unipc_sampler.sample(x, steps=len(timesteps), t_start=t_start, skip_type=shared.opts.uni_pc_skip_type, method="multistep", order=shared.opts.uni_pc_order, lower_order_final=shared.opts.uni_pc_lower_order_final)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\models\diffusion\uni_pc\uni_pc.py", line 760, in sample
        model_prev_list = [self.model_fn(x, vec_t)]
      File "E:\aidraw\stable-diffusion-webui-forge\modules\models\diffusion\uni_pc\uni_pc.py", line 455, in model_fn
        return self.data_prediction_fn(x, t)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\models\diffusion\uni_pc\uni_pc.py", line 439, in data_prediction_fn
        noise = self.noise_prediction_fn(x, t)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\models\diffusion\uni_pc\uni_pc.py", line 433, in noise_prediction_fn
        return self.model(x, t)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_timesteps_impl.py", line 124, in model
        res = self.cfg_model(x, t_input, **self.extra_args)
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 382, in _call_forward
        return CHGDenoiser.forward(self, *args, **kwargs)
      File "<string>", line 36, in forward
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 263, in forge_sample
        denoised = sampling_function(self,model, x, timestep, uncond, cond, cond_scale, model_options, seed)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 193, in sampling_function
        cond_pred, uncond_pred = calc_cond_uncond_batch(self,model, cond, uncond_, x, timestep, model_options,cond_scale)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 157, in calc_cond_uncond_batch
        output = Chara_iteration(self,model,None,input_x,timestep_,cond_scale,uncond[0]['cross_attn'],c).chunk(batch_chunks)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CharaIte.py", line 182, in Chara_iteration
        dxs_add = chara_ite_inner_loop(self, evaluations, ite_paras)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CharaIte.py", line 222, in chara_ite_inner_loop
        abt = self.alphas[t_in.long()]
    RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Script active by default?

I haven't even touched the extension, yet I suddenly get 'UnboundLocalError: local variable 'h' referenced before assignment' in img2img when adetailer or udetailer skips the img2img part.

I have multiple CFG-modifying extensions/scripts, and none of them have had issues with each other; the logs suggest this extension is the cause.

    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\webui\modules\img2img.py", line 238, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\webui\extensions\3-sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 868, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\stable-diffusion-webui\webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 735, in wrapper
        result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 1527, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\ishim\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\stable-diffusion-webui\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 651, in sample_dpmpp_2m_sde
        h_last = h
    UnboundLocalError: local variable 'h' referenced before assignment
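The failure mode can be distilled outside webui: in k-diffusion's sample_dpmpp_2m_sde, h is assigned only on steps whose next sigma is non-zero, while h_last = h runs every iteration, so a run whose only step is the final one never defines h. A runnable distillation of that control flow (not the sampler's actual code):

    import torch

    def sample(sigmas):
        # distilled from k-diffusion's sample_dpmpp_2m_sde control flow
        for i in range(len(sigmas) - 1):
            if sigmas[i + 1] == 0:
                pass                # final step: the branch assigning `h` is skipped
            else:
                h = sigmas[i].log() - sigmas[i + 1].log()
            h_last = h              # UnboundLocalError when only the final step runs

    sample(torch.tensor([0.1, 0.0]))  # a one-step schedule, as when img2img is skipped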

SDXL problem

Same error as in "Problem while trying to generate 1024 by 1024 image" above: on the Forge webui at 1024x1024 (needed by SDXL), compute_correction_direction in CharaIte.py raises "RuntimeError: torch.cat(): expected a non-empty list of Tensors" because num_x_in_cond is zero.

[forge] Errors reported when working with the refiner.

When I enable the refiner, it reports an error. It's similar to an issue some users hit with the refiner on the AUTOMATIC1111 webui in the past; perhaps the solution is also similar, but I'm not sure what to do exactly.

Characteristic Guidance recovering the CFGDenoiser

    Traceback (most recent call last):
      File "E:\aidraw\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
        task.work()
      File "E:\aidraw\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
        self.result = self.func(*self.args, **self.kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
        processed = processing.process_images(p)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
        res = process_images_inner(p)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 421, in wrapper
        raise e
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 417, in wrapper
        result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 1291, in sample
        return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\processing.py", line 1388, in sample_hr_pass
        samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 197, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_common.py", line 263, in launch_sampling
        return func()
      File "E:\aidraw\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 197, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\ldm_patched\k_diffusion\sampling.py", line 188, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 382, in _call_forward
        return CHGDenoiser.forward(self, *args, **kwargs)
      File "<string>", line 36, in forward
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 263, in forge_sample
        denoised = sampling_function(self,model, x, timestep, uncond, cond, cond_scale, model_options, seed)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 193, in sampling_function
        cond_pred, uncond_pred = calc_cond_uncond_batch(self,model, cond, uncond_, x, timestep, model_options,cond_scale)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\forge_inject.py", line 157, in calc_cond_uncond_batch
        output = Chara_iteration(self,model,None,input_x,timestep_,cond_scale,uncond[0]['cross_attn'],c).chunk(batch_chunks)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CharaIte.py", line 182, in Chara_iteration
        dxs_add = chara_ite_inner_loop(self, evaluations, ite_paras)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CharaIte.py", line 212, in chara_ite_inner_loop
        t_in = self.inner_model.sigma_to_t(sigma_in)
      File "E:\aidraw\stable-diffusion-webui-forge\repositories\k-diffusion\k_diffusion\external.py", line 69, in sigma_to_t
        dists = log_sigma - self.log_sigmas[:, None]
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

ValueError: too many values to unpack (expected 2)

The extension seems to work on SDXL and plain SD1.5 models, but appears to have issues with SD1.5 models trained with v-prediction (VPRED; an uncommon but very useful feature). At least, that's the only common denominator I've found while testing various models.

Characteristic Guidance injecting the CFGDenoiser
Characteristic Guidance sampling:
0%| | 0/20 [00:00<?, ?it/s]
Characteristic Guidance recorded iterations info for 0 steps
Characteristic Guidance recovering the CFGDenoiser
*** Error completing request
*** Arguments: ('task(e7npzgzvnog4o3q)', 'ww', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000024E27FC1300>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 1, 1, 30, 1, 0, -4, 1, 0.4, 0.5, 2, True, 'How to set parameters? Check our github!', 'More ControlNet', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, 0, '1,1', 'Horizontal', '', 2, 1, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, True, 3, 4, 0.15, 0.3, 'bicubic', 0.5, 2, True, False, False, 0.75, 1, 1, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\SDV_17\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\SDV_17\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\SDV_17\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "C:\SDV_17\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 770, in wrapper
        raise e
      File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 766, in wrapper
        result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
      File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 1142, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\SDV_17\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\SDV_17\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\SDV_17\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 128, in forward
        x_out = self.Chara_iteration(None, x_in, sigma_in, uncond, cond_scale, conds_list,
      File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 226, in Chara_iteration
        c_out, c_in = [utils.append_dims(x, x_in.ndim) for x in self.inner_model.get_scalings(sigma_in)]
    ValueError: too many values to unpack (expected 2)
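This matches k-diffusion's wrapper API: eps-prediction denoisers (CompVisDenoiser) return (c_out, c_in) from get_scalings, while v-prediction denoisers (CompVisVDenoiser) return (c_skip, c_out, c_in), so the two-value unpack breaks exactly on VPRED checkpoints. A sketch of arity-tolerant unpacking against the failing line in CHGextension.py (a sketch only; full v-prediction support would also have to apply c_skip where the denoised output is assembled):

    # Sketch: accept both (c_out, c_in) and (c_skip, c_out, c_in)
    scalings = self.inner_model.get_scalings(sigma_in)
    if len(scalings) == 3:        # v-prediction model (CompVisVDenoiser)
        c_skip, c_out, c_in = scalings
    else:                         # eps-prediction model (CompVisDenoiser)
        c_out, c_in = scalings
        c_skip = None
    c_out, c_in = (utils.append_dims(c, x_in.ndim) for c in (c_out, c_in))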


An exception occurred while Forge was running.

After a Forge update, this plugin started reporting errors and can no longer run.

*** Error calling: E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py/ui
    Traceback (most recent call last):
      File "E:\aidraw\stable-diffusion-webui-forge\modules\scripts.py", line 545, in wrap_call
        return func(*args, **kwargs)
      File "E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 237, in ui
        with gr.Row(open=True):
      File "E:\aidraw\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\component_meta.py", line 163, in wrapper
        return fn(self, **kwargs)
    TypeError: Row.__init__() got an unexpected keyword argument 'open'

*** Error running process_batch: E:\aidraw\stable-diffusion-webui-forge\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py
    Traceback (most recent call last):
      File "E:\aidraw\stable-diffusion-webui-forge\modules\scripts.py", line 884, in process_batch
        script.process_batch(p, *script_args, **kwargs)
    TypeError: ExtensionTemplateScript.process_batch() missing 15 required positional arguments: 'reg_ini', 'reg_range', 'ite', 'noise_base', 'chara_decay', 'res', 'lr', 'reg_size', 'reg_w', 'aa_dim', 'checkbox', 'markdown', 'radio', 'start_step', and 'stop_step'
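The first traceback is a Gradio 4 incompatibility: open is an Accordion argument, not a Row one, and the Gradio version bundled with recent Forge validates keyword arguments strictly instead of ignoring unknown ones. A sketch of the two compliant forms (assuming a collapsible group was intended):

    import gradio as gr

    with gr.Blocks() as demo:
        # `open` belongs to gr.Accordion, not gr.Row:
        with gr.Accordion("Characteristic Guidance", open=True):  # collapsible group
            gr.Markdown("controls go here")
        with gr.Row():  # or a plain row, if collapsibility was not intended
            gr.Markdown("controls go here")

The second error is likely a cascade: because ui() raised, the script's controls were never registered, so process_batch is later called without its fifteen expected arguments.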

Is it important to avoid unconverged steps?

For the best quality, should I avoid steps that take more iterations than the previous step, or that barely converge or don't converge at all? Does it matter as long as the final image converges and I'm not worried about time or resource consumption? Or should I be using the Reuse Correction option when I see such steps?
