
Provide large guidance scale correction for Stable Diffusion web UI (AUTOMATIC1111), implementing the paper "Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale"

Home Page: https://scraed.github.io/CharacteristicGuidance/

License: Apache License 2.0

Python 100.00%

characteristicguidancewebui's Introduction

Characteristic Guidance Web UI (enhanced sampling for high CFG scale)

About

Characteristic Guidance Web UI is an extension for the Stable Diffusion web UI (AUTOMATIC1111). It offers a theory-backed guidance sampling method with improved sample and control quality at high CFG scale (10-30).

This is the official implementation of Characteristic Guidance: Non-linear Correction for Diffusion Model at Large Guidance Scale. We are happy to announce that this work has been accepted by ICML 2024.

Features

Characteristic guidance offers improved sample generation and control at high CFG scale. Try characteristic guidance for

  • Detail refinement
  • Fixing quality issues, like
    • Weird colors and styles
    • Bad anatomy (not guaranteed 🤣, works better on Stable Diffusion XL)
    • Strange backgrounds

Characteristic guidance is compatible with every existing sampling method in Stable Diffusion WebUI. It now has preliminary support for ControlNet. (Example image prompts: "1girl running mountain grass newspaper news english"; "1girl, handstand, sports, close_up"; "StrawberryPancake"; "1girl, kimono".)

For more information and previews, please visit our project website: Characteristic Guidance Project Website (https://scraed.github.io/CharacteristicGuidance/).

Q&A: What's the difference from Dynamic Thresholding?

They are distinct, independent methods and can be used either separately or in conjunction.

  • Characteristic Guidance: Corrects both context and color, works at the given CFG scale, and iteratively corrects the input of the U-Net according to the Fokker-Planck equation.
  • Dynamic Thresholding: Mainly focuses on color, works by mimicking lower CFG scales, and clips and rescales the output of the U-Net.

Using Characteristic Guidance and Dynamic Thresholding simultaneously may further reduce saturation.
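For intuition, the snippet below sketches the output-side clip-and-rescale that dynamic thresholding performs (following the recipe from the Imagen paper; the percentile value is an illustrative assumption), contrasted with the input-side iteration of characteristic guidance. It is a conceptual sketch, not code from either extension.

    import torch

    def dynamic_threshold(denoised, percentile=0.995):
        # Dynamic thresholding post-processes the U-Net OUTPUT: clip each
        # sample's denoised prediction at a high percentile of its absolute
        # values, then rescale back into range.
        s = torch.quantile(denoised.abs().flatten(1), percentile, dim=1)
        s = s.clamp(min=1.0).view(-1, *([1] * (denoised.dim() - 1)))
        return denoised.clamp(-s, s) / s

    # Characteristic guidance instead keeps the requested CFG scale and
    # iteratively solves for a correction to the U-Net INPUT (see the paper);
    # nothing is clipped or rescaled on the output side.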

(Example image: 1girl, handstand, sportswear, gym)

Prerequisites

Before installing and using the Characteristic Guidance Web UI, ensure that you have the following prerequisites met:

  • Stable Diffusion WebUI (AUTOMATIC1111): Your system must have the Stable Diffusion WebUI by AUTOMATIC1111 installed. This interface is the foundation on which the Characteristic Guidance Web UI operates.
  • Version Requirement: The extension is developed for Stable Diffusion WebUI v1.6.0 or higher. It may work with earlier versions, but this is not guaranteed.

Installation

Follow these steps to install the Characteristic Guidance Web UI extension:

  1. Navigate to the "Extensions" tab in the Stable Diffusion web UI.
  2. In the "Extensions" tab, select the "Install from URL" option.
  3. Enter the URL https://github.com/scraed/CharacteristicGuidanceWebUI.git into the "URL for extension's git repository" field.
  4. Click on the "Install" button.
  5. After waiting for several seconds, a confirmation message should appear indicating successful installation: "Installed into stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI. Use the Installed tab to restart".
  6. Proceed to the "Installed" tab. Here, click "Check for updates", followed by "Apply and restart UI" for the changes to take effect. Note: Use these buttons for future updates to the CharacteristicGuidanceWebUI as well.

Usage

The Characteristic Guidance Web UI features an interactive interface for both txt2img and img2img modes. (Screenshot: Gradio UI for CharacteristicGuidanceWebUI)

Characteristic guidance is slow compared to classifier-free guidance. We recommend generating an image with classifier-free guidance first, then trying characteristic guidance with the same prompt and seed to enhance the image.

Activation

  • Enable Checkbox: Toggles the activation of the Characteristic Guidance features.

Visualization and Testing

  • Check Convergence Button: Allows users to test and visualize the convergence of their settings. Adjust the regularization parameters if the convergence is not satisfactory.

In practice, convergence is not always guaranteed. If characteristic guidance fails to converge at a certain time step, classifier-free guidance will be adopted at that time step.
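As a highly simplified, hypothetical illustration of this per-step fallback (the function name and the way the correction is applied are assumptions, not the extension's actual code):

    def guided_eps(eps_cond, eps_uncond, cfg_scale, chg_correction, converged):
        # Ordinary classifier-free guidance combination.
        eps_cfg = eps_uncond + cfg_scale * (eps_cond - eps_uncond)
        if not converged:
            # The characteristic iteration did not converge at this time step,
            # so plain classifier-free guidance is adopted here.
            return eps_cfg
        # Otherwise apply the correction obtained from the converged iteration.
        return eps_cfg + chg_correction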

Below are the parameters you can adjust to customize the behavior of the guidance correction:

Basic Parameters

  • Regularization Strength: Range 0.0 to 10.0 (default: 1). Adjusts the strength of regularization at the beginning of sampling; larger regularization means easier convergence and closer alignment with CFG (classifier-free guidance).
  • Regularization Range Over Time: Range 0.01 to 10.0 (default: 1). Modifies the range of time being regularized; a larger value means a slower decay in regularization strength and hence more time steps being regularized, affecting convergence difficulty and the extent of correction.
  • Max Num. Characteristic Iteration: Range 1 to 50 (default: 50). Determines the maximum number of characteristic iterations per sampling time step.
  • Num. Basis for Correction: Range 0 to 10 (default: 0). Sets the number of bases for correction, influencing the amount of correction and convergence behavior. More bases mean better quality but harder convergence. A basis number of 0 means batch-wise correction; > 0 means channel-wise correction.
  • CHG Start Step: Range 0 to 0.25 (default: 0). Characteristic guidance begins to influence the process from the specified fraction of steps, indicated by CHG Start Step.
  • CHG End Step: Range 0.25 to 1 (default: 0). Characteristic guidance ceases to have an effect from the specified fraction of steps, denoted by CHG End Step. Setting this value to approximately 0.4 can significantly speed up generation without substantially altering the outcome. (A minimal sketch of how these two fractions gate the correction follows this list.)
  • ControlNet Compatible Mode
    • More Prompt: ControlNet is turned off while iteratively solving the characteristic guidance correction.
    • More ControlNet: ControlNet is turned on while iteratively solving the characteristic guidance correction.
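As mentioned above for CHG Start Step and CHG End Step, here is a hypothetical sketch of how the two fractions could gate the correction during sampling; the function name and gating logic are illustrative assumptions, not the extension's actual code.

    def chg_active(step_index, total_steps, chg_start=0.0, chg_end=0.4):
        # Characteristic guidance only acts while the sampler's progress lies
        # inside the [chg_start, chg_end) window; outside it, plain CFG is used.
        progress = step_index / total_steps
        return chg_start <= progress < chg_end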

Advanced Parameters

  • Reuse Correction of Previous Iteration: Range 0.0 to 1.0 (default: 1.0). Controls the reuse of the correction from previous iterations to reduce abrupt changes during generation.
  • Log 10 Tolerance for Iteration Convergence: Range -6 to -2 (default: -4). Adjusts the tolerance for iteration convergence, trading off speed against image quality.
  • Iteration Step Size: Range 0 to 1 (default: 1.0). Sets the step size for each iteration, affecting the speed of convergence.
  • Regularization Annealing Speed: Range 0.0 to 1.0 (default: 0.4). Controls how fast the regularization strength decays to the desired level; smaller values potentially ease convergence.
  • Regularization Annealing Strength: Range 0.0 to 5 (default: 0.5). Determines how important regularization annealing is in the characteristic guidance iterations. A higher value gives higher priority to bringing the regularization level to the specified regularization strength, affecting the balance between annealing and convergence.
  • AA Iteration Memory Size: Range 1 to 10 (default: 2). Specifies the memory size for AA (Anderson Acceleration) iterations, influencing convergence speed and stability (see the sketch after this list).
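To give a rough feel for what these iteration parameters control, below is a generic, minimal Anderson Acceleration sketch for a fixed-point problem. It is not the extension's solver (all names and details are illustrative assumptions); it only shows how a memory size, maximum iteration count, tolerance, and step size typically interact in this kind of iteration.

    import numpy as np

    def anderson_fixed_point(g, x0, memory=2, max_iter=50, log10_tol=-4, step_size=1.0):
        # Generic Anderson-accelerated fixed-point iteration for x = g(x).
        #   memory    -- history size, cf. "AA Iteration Memory Size"
        #   max_iter  -- cf. "Max Num. Characteristic Iteration"
        #   log10_tol -- cf. "Log 10 Tolerance for Iteration Convergence"
        #   step_size -- damping of each update, cf. "Iteration Step Size"
        tol = 10.0 ** log10_tol
        x = np.asarray(x0, dtype=np.float64)
        G_hist, F_hist = [], []                       # past g(x) values and residuals
        for _ in range(max_iter):
            gx = np.asarray(g(x), dtype=np.float64)
            f = (gx - x).ravel()                      # fixed-point residual
            if np.linalg.norm(f) / np.sqrt(f.size) < tol:
                return x, True                        # converged
            G_hist.append(gx.ravel())
            F_hist.append(f)
            G_hist, F_hist = G_hist[-(memory + 1):], F_hist[-(memory + 1):]
            if len(F_hist) == 1:
                x_new = G_hist[-1]                    # plain fixed-point step
            else:
                # Type-II Anderson step: fit the current residual against the
                # differences of past residuals, then extrapolate.
                dF = np.stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)], axis=1)
                dG = np.stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)], axis=1)
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x_new = G_hist[-1] - dG @ gamma
            x = (1.0 - step_size) * x + step_size * x_new.reshape(x.shape)
        return x, False                               # not converged within max_iter

    # Example: find the fixed point of cos(x) (about 0.739).
    # x_star, converged = anderson_fixed_point(np.cos, np.array([1.0]))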

Please experiment with different settings, especially regularization strength and time range, to achieve better convergence for your specific use case. (In my experience, a high CFG scale needs a relatively large regularization strength and time range for convergence, while a low CFG scale prefers a lower regularization strength and time range for more guidance correction.)

How to Set Parameters (Preliminary Guide)

Here is my recommended approach for parameter setting:

  1. Start by running characteristic guidance with the default parameters (Use Regularization Strength=5 for Stable Diffusion XL).
  2. Verify convergence by clicking the Check Convergence button.
  3. If convergence is achieved easily:
    • Decrease the Regularization Strength and Regularization Range Over Time to enhance correction.
    • If the Regularization Strength is already minimal, consider increasing the Num. Basis for Correction for improved performance.
  4. If convergence is not reached:
    • Increment the Max Num. Characteristic Iteration to allow for additional iterations.
    • Should convergence still not occur, raise the Regularization Strength and Regularization Range Over Time for increased regularization.

Updates

February 3, 2024: New parameters that accelerate generation.

  • Thanks to @v0xie: The UI now supports two more parameters.
  • CHG Start Step: Range 0 to 0.25 (default: 0). Characteristic guidance begins to influence the process from the specified percentage of steps, indicated by CHG Start Step.
  • CHG End Step: Range 0.25 to 1 (default: 0). Characteristic guidance ceases to have an effect from the specified percentage of steps, denoted by CHG End Step. Setting this value to approximately 0.4 can significantly speed up the generation process without substantially altering the outcome.

January 28, 2024: Modify how parameter Reuse Correction of Previous Iteration works

  • Effect: The parameter Reuse Correction of Previous Iteration has been moved to the advanced parameters. Its default value is set to 1 to accelerate convergence. Regardless of its value, it now uses the same update direction as the case Reuse Correction of Previous Iteration = 0.
  • User Action Required: Please delete "ui-config.json" from the stable diffusion WebUI root directory for the update to take effect.
  • Issue: Infotext with Reuse Correction of Previous Iteration > 0 may not generate the same image as in previous versions.

January 28, 2024: Allow Num. Basis for Correction = 0

  • Effect: The Num. Basis for Correction can now take the value 0, which means batch-wise correction instead of channel-wise correction. It is a more suitable default value since it converges faster.
  • User Action Required: Please delete "ui-config.json" from the stable diffusion WebUI root directory for the update to take effect.

January 14, 2024: Bug fix: allow prompts with more than 75 tokens

  • Effect: The extension now works even if the prompt has more than 75 tokens.

January 13, 2024: Add support for V-Prediction model

  • Effect: Now the extension supports models trained in V-prediction mode.

January 12, 2024: Add support for 'AND' prompt combination

  • Effect: The extension now supports the 'AND' word in the positive prompt.
  • Current Limitations: Note that characteristic guidance only gives a correction between the positive and negative prompts. Therefore, positive prompts combined with 'AND' are averaged when computing the correction.

January 8, 2024: Improved Guidance Settings

  • Extended Settings Range: Regularization Strength & Regularization Range Over Time can now go up to 10.
  • Effect: High values of Regularization Strength & Regularization Range Over Time now reproduce classifier-free guidance results.
  • User Action Required: Please delete "ui-config.json" from the stable diffusion WebUI root directory for the update to take effect.

January 6, 2024: Integration of ControlNet

  • Early Support: We're excited to announce preliminary support for ControlNet.
  • Current Limitations: As this is an early stage, expect some developmental issues. The integration of ControlNet and characteristic guidance remains an open scientific problem (which I am investigating). Known issues include:
    • Iterations failing to converge when ControlNet is in reference mode.

January 3, 2024: UI Enhancement for Infotext

  • Thanks to @w-e-w: The UI now supports infotext reading.
  • How to Use: Check out this PR for detailed instructions.

Compatibility and Issues

Citation

If you utilize characteristic guidance in your research or projects, please consider citing our paper:

@misc{zheng2023characteristic,
      title={Characteristic Guidance: Non-linear Correction for DDPM at Large Guidance Scale},
      author={Candi Zheng and Yuan Lan},
      year={2023},
      eprint={2312.07586},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

characteristicguidancewebui's People

Contributors

scraed, w-e-w, v0xie, ka-de


characteristicguidancewebui's Issues

Add Webui forge support?

When I'm using webui, I really like how it boosts the image quality.
But when I switch over to webui forge, I run into some issues. Compared to webui, forge seems to use VRAM more efficiently and speeds up the generation process. It'd be great if you guys could add support for webui forge.

Please add support for "AND" word.

While using default "AND" word or "AND_SALT" word by Neutral Prompt, it errors "RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0". It won't happen while turns CharacteristicGuidance off.

Error while using the plugin

The plugin works occasionally, but most of the time I get the following error in the console:

    Traceback (most recent call last):
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
        output = await app.get_blocks().process_api(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
        result = await self.call_function(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
        prediction = await anyio.to_thread.run_sync(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
        return await get_asynclib().run_sync_in_worker_thread(
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
        return await future
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
        result = context.run(func, *args)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
        response = f(*args, **kwargs)
      File "F:\SD\Data\Packages\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 528, in update_plot
        fig, axs = plt.subplots(len(res), 1, figsize=(10, 4 * len(res)))
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\pyplot.py", line 1599, in subplots
        axs = fig.subplots(nrows=nrows, ncols=ncols, sharex=sharex, sharey=sharey,
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\figure.py", line 930, in subplots
        gs = self.add_gridspec(nrows, ncols, figure=self, **gridspec_kw)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\figure.py", line 1542, in add_gridspec
        gs = GridSpec(nrows=nrows, ncols=ncols, figure=self, **kwargs)
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\gridspec.py", line 378, in __init__
        super().__init__(nrows, ncols,
      File "F:\SD\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\matplotlib\gridspec.py", line 48, in __init__
        raise ValueError(
    ValueError: Number of rows must be a positive integer, not 0

Also, this is the info when I start my instance of webui:

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Check mmcv version...
Your mmcv version 2.0.1 may not work mmyolo.
Please install mmcv version 2.0.0 manually or uninstall mmcv and restart UI again to install mmcv 2.0.0
Check mmengine version...
your mmengine version is 0.8.5
Launching Web UI with arguments: --medvram-sdxl --xformers --api --skip-python-version-check --listen --enable-insecure-extension-access
Style database not found: F:\SD\Data\Packages\stable-diffusion-webui\styles.csv
[-] ADetailer initialized. version: 23.12.0, num models: 20
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: F:\SD\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-01-09 08:54:14,952 - ControlNet - INFO - ControlNet v1.1.425
2024-01-09 08:54:15,233 - ControlNet - INFO - ControlNet v1.1.425
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [838643492f] from F:\SD\Data\Packages\stable-diffusion-webui\models\Stable-diffusion\15\aniverse_v16Pruned.safetensors
Creating model from config: F:\SD\Data\Packages\stable-diffusion-webui\configs\v1-inference.yaml
Total 8 mmdet, 7 yolo and 3 mediapipe models.
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\face_yolov8n.pth!
You can enable model validity tester in the Settings-> μ DDetailer.
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\face_yolov8s.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\hand_yolov8n.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\hand_yolov8s.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\bbox\mmdet_anime-face_yolov3.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\mmdet_dd-person_mask2former.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\yolov5_ins_n.pth!
SUCCESS - success to load config for F:\SD\Data\Packages\stable-diffusion-webui\models\mmdet\segm\yolov5_ins_s.pth!
Total 8 valid mmdet configs are found.
You can disable validity tester in the Settings-> μ DDetailer.
Check config files...
Done
2024-01-09 08:54:19,590 - AnimateDiff - INFO - Injecting LCM to UI.
loading.. cn_modole

  • ControlNet extension sd-webui-controlnet found
  • default µ DDetailer model= bbox/face_yolov8n.pth [882ebbb6]
    2024-01-09 08:54:20,151 - AnimateDiff - INFO - Hacking i2i-batch.
  • default µ DDetailer model= bbox/face_yolov8n.pth [882ebbb6]
    Running on local URL: http://0.0.0.0:7860
    Loading VAE weights specified in settings: F:\SD\Data\Packages\stable-diffusion-webui\models\VAE\15\vae-ft-mse-840000-ema-pruned.safetensors

To create a public link, set share=True in launch().
Startup time: 46.0s (prepare environment: 11.3s, import torch: 5.4s, import gradio: 2.3s, setup paths: 3.0s, initialize shared: 0.3s, other imports: 2.1s, setup codeformer: 0.4s, list SD models: 0.7s, load scripts: 4.0s, scripts before_ui_callback: 4.0s, create ui: 4.6s, gradio launch: 6.8s, add APIs: 0.1s, app_started_callback: 1.1s).
Applying attention optimization: xformers... done.
Model loaded in 18.5s (load weights from disk: 0.7s, create model: 0.8s, apply weights to model: 12.7s, load VAE: 1.9s, move model to device: 0.5s, load textual inversion embeddings: 0.7s, calculate empty prompt: 1.0s).

Can you help me out with this, please?
Thank you!

ValueError: too many values to unpack (expected 2)

The extension seems to work on SDXL and plain SD1.5 versions, but seems to have some issue with 1.5 models that are trained with VPRED (an uncommon, but very useful feature). At least that's the only common denominator I've found while testing it on various models.

Characteristic Guidance injecting the CFGDenoiser
Characteristic Guidance sampling:
0%| | 0/20 [00:00<?, ?it/s]
Characteristic Guidance recorded iterations info for 0 steps
Characteristic Guidance recovering the CFGDenoiser
*** Error completing request
*** Arguments: ('task(e7npzgzvnog4o3q)', 'ww', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x0000024E27FC1300>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, 1, 1, 30, 1, 0, -4, 1, 0.4, 0.5, 2, True, 'How to set parameters? Check our github!', 'More ControlNet', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, True, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, False, -1, -1, 0, '1,1', 'Horizontal', '', 2, 1, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, True, 3, 4, 0.15, 0.3, 'bicubic', 0.5, 2, True, False, False, 0.75, 1, 1, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "C:\SDV_17\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\SDV_17\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\SDV_17\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 734, in process_images
res = process_images_inner(p)
File "C:\SDV_17\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 770, in wrapper
raise e
File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 766, in wrapper
result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
File "C:\SDV_17\stable-diffusion-webui\modules\processing.py", line 1142, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "C:\SDV_17\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\SDV_17\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\SDV_17\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\SDV_17\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 128, in forward
x_out = self.Chara_iteration(None, x_in, sigma_in, uncond, cond_scale, conds_list,
File "C:\SDV_17\stable-diffusion-webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 226, in Chara_iteration
c_out, c_in = [utils.append_dims(x, x_in.ndim) for x in self.inner_model.get_scalings(sigma_in)]
ValueError: too many values to unpack (expected 2)


Script active by default?

I did not even touch the extension, yet suddenly I get 'UnboundLocalError: local variable 'h' referenced before assignment' in img2img when trying to use adetailer or udetailer, skipping the img2img part.

I have multiple CFG-modifying extensions/scripts and none of those have had issues with each other, and it seems from the logs that this extension is the cause.

    Traceback (most recent call last):
      File "D:\stable-diffusion-webui\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\stable-diffusion-webui\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\stable-diffusion-webui\webui\modules\img2img.py", line 238, in img2img
        processed = process_images(p)
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "D:\stable-diffusion-webui\webui\extensions\3-sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 868, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\stable-diffusion-webui\webui\extensions\CharacteristicGuidanceWebUI\scripts\CHGextension.py", line 735, in wrapper
        result = sample(conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength,
      File "D:\stable-diffusion-webui\webui\modules\processing.py", line 1527, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\stable-diffusion-webui\webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\ishim\AppData\Roaming\Python\Python310\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\stable-diffusion-webui\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 651, in sample_dpmpp_2m_sde
        h_last = h
    UnboundLocalError: local variable 'h' referenced before assignment

Is it important to avoid unconverged steps?

Do I want to avoid steps that take more iterations than the previous step, or that are barely or not converged, as much as possible for the best quality? Does it matter if the final image has convergence and I'm not worried about time or resource consumption? Or should I be using the Reuse Correction option when I see them?
