
Ultimate SD Upscale extension for AUTOMATIC1111 Stable Diffusion web UI

With this extension you can use a high denoise strength (0.3-0.5) without spawning many artifacts. It works on any video card, since the image is redrawn in tiles (e.g. 512x512) that fit into VRAM.

News channel: https://t.me/usdunews

Instructions

All instructions can be found on the project's wiki.

Refs

https://github.com/ssitu/ComfyUI_UltimateSDUpscale - Implementation for ComfyUI

Examples

More examples can be found on the wiki page.

E1 Original image

Original

2k upscaled. Tile size: 512, Padding: 32, Mask blur: 16, Denoise: 0.4

E2 Original image

Original

2k upscaled. Tile size: 768, Padding: 55, Mask blur: 20, Denoise: 0.35

4k upscaled. Tile size: 768, Padding: 55, Mask blur: 20, Denoise: 0.35

E3 Original image

Original

4k upscaled. Tile size: 768, Padding: 55, Mask blur: 20, Denoise: 0.4

API Usage

{
  "script_name": "ultimate sd upscale",
  "script_args": [
    null,  // _ (not used)
    512,   // tile_width
    512,   // tile_height
    8,     // mask_blur
    32,    // padding
    64,    // seams_fix_width
    0.35,  // seams_fix_denoise
    32,    // seams_fix_padding
    0,     // upscaler_index
    true,  // save_upscaled_image a.k.a Upscaled
    0,     // redraw_mode
    false, // save_seams_fix_image a.k.a Seams fix
    8,     // seams_fix_mask_blur
    0,     // seams_fix_type
    0,     // target_size_type
    2048,  // custom_width
    2048,  // custom_height
    2      // custom_scale
  ]
}
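Calling the script through the web UI's img2img API can be sketched in Python. The /sdapi/v1/img2img endpoint and the script_name/script_args mechanism are standard AUTOMATIC1111 API features; the host, port, and image path below are placeholders, and the argument values mirror the JSON above.

```python
# Hedged sketch: build the img2img payload for Ultimate SD Upscale.
# The argument order must match the script_args list documented above.
def build_payload(image_b64, denoise=0.35):
    return {
        "init_images": [image_b64],       # base64-encoded input image
        "denoising_strength": denoise,
        "script_name": "ultimate sd upscale",
        "script_args": [
            None, 512, 512, 8, 32,  # _, tile_width, tile_height, mask_blur, padding
            64, 0.35, 32,           # seams_fix_width, seams_fix_denoise, seams_fix_padding
            0, True, 0, False,      # upscaler_index, save_upscaled_image, redraw_mode, save_seams_fix_image
            8, 0, 0,                # seams_fix_mask_blur, seams_fix_type, target_size_type
            2048, 2048, 2,          # custom_width, custom_height, custom_scale
        ],
    }

# Usage (requires the web UI running with --api):
# import base64, requests
# with open("input.png", "rb") as f:
#     img = base64.b64encode(f.read()).decode()
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=build_payload(img))
```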

upscaler_index

Value
0 None
1 Lanczos
2 Nearest
3 ESRGAN_4x
4 LDSR
5 R-ESRGAN_4x+
6 R-ESRGAN 4x+ Anime6B
7 ScuNET GAN
8 ScuNET PSNR
9 SwinIR 4x

redraw_mode

Value
0 Linear
1 Chess
2 None

seams_fix_type

Value
0 None
1 Band pass (internal name: BAND_PASS)
2 Half tile offset pass (HALF_TILE)
3 Half tile offset pass + intersections (HALF_TILE_PLUS_INTERSECTIONS)

target_size_type

Value
0 From img2img settings
1 Custom size
2 Scale from image size
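For readability when assembling script_args, the index tables above can be turned into lookups. The lists below copy the table values in order (position == index); the helper itself is only an illustration.

```python
# Index tables from the docs above; list position equals the API index value.
UPSCALERS = ["None", "Lanczos", "Nearest", "ESRGAN_4x", "LDSR",
             "R-ESRGAN_4x+", "R-ESRGAN 4x+ Anime6B", "ScuNET GAN",
             "ScuNET PSNR", "SwinIR 4x"]
REDRAW_MODES = ["Linear", "Chess", "None"]
SEAMS_FIX_TYPES = ["None", "Band pass", "Half tile offset pass",
                   "Half tile offset pass + intersections"]
TARGET_SIZE_TYPES = ["From img2img settings", "Custom size",
                     "Scale from image size"]

def index_of(table, name):
    """Translate a human-readable name into the integer the API expects."""
    return table.index(name)
```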

ultimate-upscale-for-automatic1111's People

Contributors

airjen, coyote-a, danamir, emmortal451, imanrep, jopezzia, rkfg, wywywywy


ultimate-upscale-for-automatic1111's Issues

Has no attribute 'flatten'

Hello,
I get an error :) How do I fix it?

-upscale.py", line 513, in run
init_img = images.flatten(init_img, opts.img2img_background_color)
AttributeError: module 'modules.images' has no attribute 'flatten'

Feature request: cache upscaler result

If there is already a way to do this, I apologize but I couldn't figure it out.

Sometimes with very large images the upscaler can take a significant amount of time (some upscalers are not fast, SwinIR and most especially LDSR), and I have to run Ultimate Upscale multiple times with different denoise values to get the best final output. It would save a lot of time if I could run the upscaler once and then do the tile generation multiple times with different denoise values. Is there already a way to do this?

Better positioning of tiles

I just saw two tiles overlapping by about 80%, so I guess they could be placed with equal spacing to optimize their usage and minimize edge artifacts.

ZeroDivisionError when using the script in batch mode

I love this script, have been using it successfully for a while now. But today I tried running it in the batch tab. What's interesting is it works, but it does throw this error in the console:

ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in call
await super().call(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in call
await self.middleware_stack(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in call
await responder(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in call
await self.app(scope, receive, self.send_with_gzip)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in call
await route.handle(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\AUTOMATIC1111\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\AUTOMATIC1111\stable-diffusion-webui\modules\progress.py", line 73, in progressapi
progress += 1 / shared.state.job_count * shared.state.sampling_step / shared.state.sampling_steps
ZeroDivisionError: division by zero

After that the web ui stops updating, but the job still runs and completes successfully.
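The crash comes from dividing by shared.state.job_count (or sampling_steps) while it is still 0 in batch mode. A hedged sketch of the kind of guard that avoids it, using names from the traceback purely as illustration:

```python
# Illustrative zero-guard for the progress computation in the traceback.
# job_count/sampling_steps can legitimately be 0 before the first step
# is reported; skip the increment instead of dividing by zero.
def safe_progress(progress, job_count, sampling_step, sampling_steps):
    if job_count > 0 and sampling_steps > 0:
        progress += 1 / job_count * sampling_step / sampling_steps
    return min(progress, 1.0)
```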


[BUG]

Hi, after the new update of A1111 this issue appears:

Traceback (most recent call last):
File "I:\Github\new_webui\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "I:\Github\new_webui\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "I:\Github\new_webui\stable-diffusion-webui\modules\img2img.py", line 146, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "I:\Github\new_webui\stable-diffusion-webui\modules\scripts.py", line 347, in run
processed = script.run(p, *script_args)
File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 543, in run
upscaler.process()
File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 129, in process
self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 236, in start
return self.linear_process(p, image, rows, cols)
File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 171, in linear_process
processed = processing.process_images(p)
File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 485, in process_images
res = process_images_inner(p)
File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 667, in process_images_inner
image = apply_overlay(image, p.paste_to, i, p.overlay_images)
File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 68, in apply_overlay
image = images.resize_image(1, image, w, h)
File "I:\Github\new_webui\stable-diffusion-webui\modules\images.py", line 278, in resize_image
resized = resize(im, src_w, src_h)
File "I:\Github\new_webui\stable-diffusion-webui\modules\images.py", line 261, in resize
im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
File "I:\Github\new_webui\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 154, in do_upscale
img = esrgan_upscale(model, img)
File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 225, in esrgan_upscale
output = upscale_without_tiling(model, tile)
File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 204, in upscale_without_tiling
output = model(img)
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model_arch.py", line 61, in forward
return self.model(feat)
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 204, in forward
input = module(input)
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Github\new_webui\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 256, 256] to have 3 channels, but got 4 channels instead

Windows
Firefox
Commit hash: 6cff4401824299a983c8e13424018efc347b4a2b

Feature request: Tile size width & height (useful for non 1:1 ratios)

Could you enable a width and height selector for the tile size instead of a single dimension?

When I upscale images with a ratio different than 1:1 in the standard SD Upscale script I usually select a tall or wide tile to minimize the images generated and thus the seams. Sometimes I set the exact destination width/height if my vram is big enough.

For example if I have a 1280x720 image that I want rendered to 2560x1440 (i.e. for a wallpaper), I prefer to have tiles of 896x1440 to have only 3 vertical tiles in the final image. (NB: to calculate the width I take 2560 / 3 = 853.33, where 3 is the number of tiles, and then select the next multiple of 64, which also covers the 64 padding). Or I could do 2 tiles of 1344x1440.
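The tile-width arithmetic described above (divide the target width by the tile count, then round up to the next multiple of 64) can be sketched as:

```python
import math

def tile_width(target_width, n_tiles, multiple=64):
    # Round the per-tile width up so n_tiles tiles cover the target width
    # and the size stays a multiple the model handles well.
    return math.ceil(target_width / n_tiles / multiple) * multiple

# 2560 px wide, 3 vertical tiles -> 896 px per tile; 2 tiles -> 1280 px
```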

How to prevent small people from being generated in some squares?

Using the recommended settings and 0.35 denoise, my images almost always get small versions of the character generated all over the background. Am I doing something wrong? Reducing denoise to around 0.15 fixes that, but then it's almost no different from normal upscaling.

Request for the Addition of 4K Support Option

We propose the inclusion of an option that allows for 4K support within the extension. When selected, this option would search for and replace occurrences of 2048 with 4096 in the ui-config.json file, followed by a restart of the UI or a config reload. If the option is subsequently deselected, the original configuration would be restored. Implementing this feature would greatly enhance the user experience.

It is not possible to output both Upscaled and Seams fix at the same time using SwinIR 4x.

I found that when using SwinIR, if I try to output both the Upscaled and Seams fix images at the same time, the following error occurs and only the Upscaled image is output. Is this a bug? When I use LDSR as the upscaler, I don't have this problem.

Error completing request39:13,  3.75it/s]
Arguments: ('task(q71xh4bp1dd9w7q)', 0, 'highest quality,masterpiece,hand of Guido Daniele,blue_hair, 1girl, (rem_\\(re:zero\\)),solo,maid, short_hair, detached_sleeves, roswaal_mansion_maid_uniform, white_legwear, skirt,beautiful sunset,falling petals,upper body,85mm lens,8K,HD,(family friendly),Smile,CG,outside roswaal_mansion,x hairpin,welcome,(pink lips),beautiful ribbon,bangs cover the right eye,,water blue eyes,(holding a super cute bird:1.25),', 'nsfw,ugly,bad_anatomy,bad_hands,extra_hands,missing_fingers,broken hand,(more than two hands),well proportioned hands,more than two legs,unclear eyes,pull skirt,missing_arms,mutilated,extra limbs,extra legs,cloned face,fused fingers,extra_digit, fewer_digits,extra_digits,jpeg_artifacts,signature,watermark,username,blurry,large_breasts,worst_quality,low_quality,normal_quality,mirror image, Vague,unclear fingers,bad hands', [], <PIL.Image.Image image mode=RGBA size=1536x2048 at 0x25E400AF5B0>, None, None, None, None, None, None, 70, 0, 4, 0, 1, False, False, 1, 1, 12, 0.4, -1.0, -1.0, 0, 0, 0, False, 2048, 1536, 0, 0, 32, 0, '', '', '', [], 10, 0, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, None, '', 0.2, 0.1, 1, 1, False, True, True, False, False, False, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 8, 32, 64, 0.35, 32, 8, False, 0, True, 8, 1, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\Users\sus00\Desktop\AI\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\sus00\Desktop\AI\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Users\sus00\Desktop\AI\modules\img2img.py", line 166, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\sus00\Desktop\AI\modules\scripts.py", line 376, in run
    processed = script.run(p, *script_args)
  File "C:\Users\sus00\Desktop\AI\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 543, in run
    upscaler.process()
  File "C:\Users\sus00\Desktop\AI\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 136, in process
    self.image = self.seams_fix.start(self.p, self.image, self.rows, self.cols)
  File "C:\Users\sus00\Desktop\AI\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 401, in start
    return self.band_pass_process(p, image, rows, cols)
  File "C:\Users\sus00\Desktop\AI\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 373, in band_pass_process
    processed = processing.process_images(p)
  File "C:\Users\sus00\Desktop\AI\modules\processing.py", line 484, in process_images
    res = process_images_inner(p)
  File "C:\Users\sus00\Desktop\AI\modules\processing.py", line 577, in process_images_inner
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
  File "C:\Users\sus00\Desktop\AI\modules\processing.py", line 946, in init
    mask = mask.crop(crop_region)
  File "C:\Users\sus00\Desktop\AI\venv\lib\site-packages\PIL\Image.py", line 1228, in crop
    raise ValueError(msg)
ValueError: Coordinate 'right' is less than 'left'

AttributeError: module 'modules.images' has no attribute 'flatten'

First attempt at this and I'm getting error: AttributeError: module 'modules.images' has no attribute 'flatten'

Below that it says: Time taken: 0.03s Torch active/reserved: 0/0 MiB, Sys VRAM: 2478/12288 MiB (20.17%)

Not experienced with troubleshooting auto1111 or its extensions, so if there's a way to provide additional info please let me know.

Edit: this is the output in the cmd line window:

Arguments: (0, '', '', 'None', 'None', <PIL.Image.Image image mode=RGB size=1600x896 at 0x22CB2FB6B90>, None, None, None, None, 0, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, False, 32, 0, '', '', 9, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 1, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\Users\Colton\stable-diffusion-webui-master\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\Users\Colton\stable-diffusion-webui-master\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\Users\Colton\stable-diffusion-webui-master\modules\img2img.py", line 150, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\Users\Colton\stable-diffusion-webui-master\modules\scripts.py", line 328, in run
    processed = script.run(p, *script_args)
  File "C:\Users\Colton\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 524, in run
    init_img = images.flatten(init_img, opts.img2img_background_color)
AttributeError: module 'modules.images' has no attribute 'flatten'

[Feature Request] Support inpainting

It would be great if the script supported the inpaint mask so only some parts of the image were affected by the img2img changes. For example to apply the process to the subject (with a somewhat high denoising) but ignore the background.

Is there any way to get this as a custom node for ComfyUI?

Assuming this is the SD upscale script in img2img of automatic1111, I use this all the time to upscale my images.
It works beautifully: denoise set to 0.35, tile overlap set to 96, 4x_foolhardy_Remacri set as the upscaler.....
I recently switched to ComfyUI, but cannot for the life of me get upscales as clear with that as I can with your script.
Is there any way your script could be converted into a custom node?

Feature request: Ask "how many tiles" instead of "how big are the tiles"

Hello, I was looking for a feature (even back when I was using the native SD upscale) that uses the number of grid tiles as the input instead of the size of the tiles.

Say I have a 1024x1024 image, and I want it to be chopped up into a 2x2 grid.

In order to do that, I'd have to do the math on my end: divide the image dimensions by 2, which would equal 512, then enter that as the tile size.

So maybe instead of the user doing the math, the extension could have a dropdown to choose between tile size and tile amount; when tile amount is selected, it would show two number fields

Tiles along width:
Tiles along height:

so if the user types

Tiles along width: 2
Tiles along height: 2

the image would be split into a 2x2 grid.
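The requested conversion is simple to sketch: the user supplies tile counts per axis and the extension derives the tile size it already understands. The function name is hypothetical.

```python
def grid_to_tile_size(width, height, tiles_x, tiles_y):
    # Integer division; a real implementation would also round to a
    # model-friendly multiple (e.g. 64) and absorb remainders via padding.
    return width // tiles_x, height // tiles_y

# A 1024x1024 image in a 2x2 grid -> 512x512 tiles
```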

What am I doing wrong? Results are terrible and unrelated to the original

First of all, it gives a warning in the cmd window:

Warning: Bad ui setting value: customscript/ultimate-upscale.py/img2img/Type/value: Linear; Default value "None" will be used instead.

Here my settings and the results

image

results

image

Impossible to get rid of seams

Hello, I tried every option and I can't get rid of the seams on my image. Maybe I'm doing something wrong, but I read the FAQ and I'm still not able to fix this.

I'm running on an AMD 6800 XT GPU.

Here is my img2img tab:
image

Here is the result if I try with img2img denoising at 1 to test the tiles:

image

And here you can see the kind of tiles I get (these were 512x512 tiles):

image

Help would be appreciated, I really love this plugin.

Add checkboxes for "Use Seed From Image" and "Use Prompt From Image" for ideal batch processing

First of all, thanks for this extension -- it's amazing!!

One of the things that would really make this extension complete involves upgrades that would allow for effective batch processing, which is currently extremely limited.

Don't know how complicated this is on your end, but for batch processing it would be really great to be able to override the seed in the web UI by pulling it directly from the image metadata. This would allow each image to get the seed that was originally used to generate it, regardless of the order in which it is provided (for instance, a folder with 10 selected images out of a run of 200).

Currently, batch processing will use the same seed for every image, which works great for one image, but the rest will have some unwanted deviation from the original generation, polluting the upscaling results.

Additionally, I often use dynamic prompts for my image generation, so if the seed could be used to pull the correct prompt for that seed from what is in the web UI, that would be ideal (so that you can make modified versions of the dynamic prompt for upscaling if desired), though I suspect that this is fairly complicated from a code standpoint since it involves interaction with another extension. As a fallback, a checkbox that pulls the prompt used for the image from its metadata would do.

Together, these two upgrades would allow you to execute full batch runs with dynamic prompts and arbitrary seeds with a perfect match for upscaling purposes.

Thanks for your consideration!

Upscaling by factor 1

Some upscalers do useful work even if the scale factor is 1, for example: 1x_ReFocus_V3_140000_G, or in general the Deblurring models in https://upscale.wiki/wiki/Model_Database.

Could the script be updated so that it runs the upscaler even if the scale factor is 1? I think even with this feature it is possible to recover the original behavior by using None as the upscaler. Otherwise, maybe this behavior could be enabled with a checkbox.

[Feature Request] Support ControlNet

Please add support for ControlNet so that the currently upscaled part of the image takes the corresponding part of the ControlNet annotators into account.

Bug Report / offset pass + Intersections / UnboundLocalError: local variable 'processed' referenced before assignment

When using the Seams fix option with the type set to Half tile offset pass + intersections, and only for this type, I get this error message near the end of the image synthesis process:
UnboundLocalError: local variable 'processed' referenced before assignment

The image result is not shown on the WebUI (just the little error icon from Gradio), but there is one image (and only one) saved on the hard disk. It's probably the upscaled picture in its initial state, before the Seams Fix could be applied.

Here is the complete log from the Command Window:

Error completing request:05,  2.13it/s]
Arguments: (0, '', '', 'None', 'None', <PIL.Image.Image image mode=RGBA size=1024x512 at 0x18C336ADBA0>, None, None, None, None, None, None, 30, 15, 4, 0, 1, False, False, 1, 1, 7, 0.35, -1.0, -1.0, 0, 0, 0, False, 768, 768, 3, 0, 32, 0, '', '', 18, 0, 0, 0, 0, 0.25, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, False, False, 0, -1, True, '<p style="margin-bottom:0.75em">You can edit extensions/model-keyword/model-keyword-user.txt to add custom mappings</p>', 'keyword prompt', 'keyword1, keyword2', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, False, False, False, False, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'Refresh models', False, 'Denoised', 5.0, 0.0, False, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, False, 0, True, 384, 384, False, 2, True, True, False, False, True, False, 'C:\\stable-diffusion-webui-master\\extensions\\sd-webui-riffusion\\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 10.0, True, 30.0, True, '', False, False, False, False, 'Auto', 0.5, 1, 0, 0, 384, 384, False, False, True, True, True, False, True, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, '{inspiration}', None, 0, 1, 384, 384, True, False, True, True, True, False, 1, True, 3, False, 3, False, 3, 1, '<p 
style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 1024, 20, 56, 64, 0.35, 32, 7, True, 0, True, 8, 3, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-master\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui-master\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui-master\modules\img2img.py", line 146, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui-master\modules\scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 533, in run
    upscaler.process()
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 134, in process
    self.image = self.seams_fix.start(self.p, self.image, self.rows, self.cols)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 396, in start
    return self.half_tile_process_corners(p, image, rows, cols)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 334, in half_tile_process_corners
    self.initial_info = processed.infotext(p, 0)
UnboundLocalError: local variable 'processed' referenced before assignment

Issue with USD upscale in Automatic 1111 after update

Hello. Yesterday I updated Automatic 1111 and the problems started. I render upscales with previously working settings: Denoise: 0.31-0.35, Type: Half tile offset pass, Mask blur: 16, Padding: 32. The results are strange, as if denoise were 0.7. Upscaler: ESRGAN_4x. I am attaching the original 512x512 image and the 2048x2048 upscale (Model hash: 47ae8a99c5, Denoising strength: 0.31, Mask blur: 4, Ultimate SD upscale upscaler: ESRGAN_4x, Ultimate SD upscale tile_size: 512, Ultimate SD upscale mask_blur: 16, Ultimate SD upscale padding: 32)
06889-447280548-(Pale Elf) (tall) ((sexy)) ((mutant cyborg)) ((((dark goth)))) girl with big brests, ((cybernetic implants)), (((pale skin))), (
01056-675113696-(Pale Elf) (tall) ((sexy)) ((mutant cyborg)) ((((dark goth)))) girl with big brests, ((cybernetic implants)), (((pale skin))), (

[BUG] UU is broken

Hi, maybe I did something wrong, but I have this issue each time:

Traceback (most recent call last):
  File "I:\Github\new_webui\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "I:\Github\new_webui\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\img2img.py", line 150, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 492, in run
    upscaler.process()
  File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 123, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
  File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 231, in start
    return self.chess_process(p, image, rows, cols)
  File "I:\Github\new_webui\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 201, in chess_process
    processed = processing.process_images(p)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 479, in process_images
    res = process_images_inner(p)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 645, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\processing.py", line 68, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\images.py", line 278, in resize_image
    resized = resize(im, src_w, src_h)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\images.py", line 261, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 154, in do_upscale
    img = esrgan_upscale(model, img)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 225, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model.py", line 204, in upscale_without_tiling
    output = model(img)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "I:\Github\new_webui\stable-diffusion-webui\modules\esrgan_model_arch.py", line 61, in forward
    return self.model(feat)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "I:\Github\new_webui\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead
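This RuntimeError means the ESRGAN model's first convolution expects a 3-channel RGB input but received 4 channels (note the `input[1, 4, 192, 192]` in the message), which usually points to an RGBA source image. A minimal workaround sketch, assuming the alpha channel is the culprit; `ensure_rgb` is an illustrative helper, not part of the extension:

```python
from PIL import Image

def ensure_rgb(image: Image.Image) -> Image.Image:
    """Drop the alpha channel so 3-channel models accept the image."""
    if image.mode != "RGB":
        return image.convert("RGB")
    return image

# A 4-channel RGBA image becomes a 3-channel RGB one
rgba = Image.new("RGBA", (192, 192))
print(ensure_rgb(rgba).mode)  # RGB
```

Re-saving the source PNG without transparency before sending it to img2img achieves the same thing.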

Feature Request: Asymmetric Tiling

Even if the source picture tiles seamlessly, the seams become visible once it is upscaled with Ultimate Upscale.

Currently, if the seamlessly tiling source image is upscaled with the Asymmetric Tiling extension turned ON, the results are worse: it seems to highlight the edge instead of hiding it.

With Asymmetric Tiling turned OFF, the seam still becomes very visible on the upscaled result, but it's not as bad.

Since you are already splitting the source image into tiles, and already looking at neighbouring tiles to keep seamless continuity between them, could you take the leftmost tiles and repeat them on the right side, and do the opposite, right-to-left, for the rightmost tiles? And then apply the same principle on the vertical Y axis (top tiles to bottom and bottom tiles to top) if the user has selected that option?

Maybe it's easier to manage if you create your own Tile-in-X and Tile-in-Y checkboxes, rather than connecting to the existing Asymmetric Tiling extension?
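The wrap-around idea above can be sketched with NumPy's wrap padding: before tiling, pad the image with pixels taken from the opposite edge, so edge tiles are redrawn with seamless context. This is an illustrative sketch, not code from the extension:

```python
import numpy as np

def wrap_pad(image: np.ndarray, pad: int) -> np.ndarray:
    """Pad height and width with pixels from the opposite edge,
    so tiles cut from the border see seamless wrap-around context."""
    return np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="wrap")

img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
padded = wrap_pad(img, 1)
# The left padding column equals the original rightmost column
assert (padded[1:-1, 0] == img[:, -1]).all()
```

After redrawing, the padded border would be cropped away; tiles along each edge would then have been blended with content from the opposite edge, which is the behaviour this request describes.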

One use for very high resolution images is to create immersive 360 panoramas like this one I made with your extension:

https://renderstuff.com/tools/360-panorama-web-viewer-sharing/?image=https://i.imgur.com/qfKL5hO.jpg&title=SD-to-360%20cave%20prototype

But for now I have to edit the seam manually to hide how visible it is after the upscale. I do that in Photoshop, and it's OK, but it would be much easier to do it directly during the image upscale process.

Is there a recommended settings template to remove seams/blocks?

I've been working with an image for a while, including multiple upscales... now the image is huge, ~2000x8000 px, but I realised that, if I look closely, it's full of tiny grids/seams, maybe about 260 pixels apart. I think this might have come from using bad upscale padding/other settings early on while experimenting, and I never noticed it until now.

I think the general recommendation here is to do an img2img pass at ~0.1 denoising strength. However, I don't have enough VRAM for this size.

One possibility is to run the web UI in CPU-only mode to do a CPU img2img at native resolution, but I've heard that CPU is not only slow (not that big of an issue) but also gives very bad quality?

So, I was wondering if anyone knows if there are good settings/ratios of settings that essentially accomplish the same thing?

So far, I have tried all the built-in seams-fix options at maxed mask blur values where applicable, but they all just shift the blockiness around instead of removing it.
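For what it's worth, a low-denoise redraw pass can also be scripted through the web UI API using the script_args layout documented in this README, which makes it easy to experiment with seams-fix combinations on crops before committing to the full image. The values below are illustrative assumptions, not a known-good recipe; the payload (plus an init image) goes to POST /sdapi/v1/img2img:

```python
# Illustrative /sdapi/v1/img2img payload: a low-denoise pass with the chess
# redraw mode and the strongest seams-fix type. Index meanings follow the
# script_args table in this README.
payload = {
    "denoising_strength": 0.1,  # low denoise, per the usual recommendation
    "script_name": "ultimate sd upscale",
    "script_args": [
        None,   # _ (not used)
        512,    # tile_width
        512,    # tile_height
        16,     # mask_blur
        32,     # padding
        64,     # seams_fix_width
        0.25,   # seams_fix_denoise
        32,     # seams_fix_padding
        3,      # upscaler_index: ESRGAN_4x
        True,   # save_upscaled_image
        1,      # redraw_mode: Chess
        False,  # save_seams_fix_image
        8,      # seams_fix_mask_blur
        3,      # seams_fix_type: Half tile offset pass + intersections
        2,      # target_size_type: Scale from image size
        2048,   # custom_width (unused with this target_size_type)
        2048,   # custom_height (unused with this target_size_type)
        1,      # custom_scale (1x, i.e. redraw at the current size)
    ],
}
assert len(payload["script_args"]) == 18
```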

Example chunks of my "blocky" images:

blocky

blocky2

Absolutely awesome. How does this work?

I got noticeably better results than with SD Upscale, and definitely fewer artifacts. I also really like having more control over the final resolution, seam correction method, etc.
How does this work technically? How is it different from SD Upscale? Is it somewhat more aware of the contents of the whole image? I noticed it requires fewer tiles, so I guess the algorithm is more "content aware", and faster at the same time.
And lastly, can you tell me the difference between the "Linear", "Chess" and "None" types in the upscaler settings?
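On the last question, as I understand the redraw modes: Linear redraws tiles one by one in row order; Chess redraws them in a checkerboard order, so adjacent tiles fall into separate passes and a freshly redrawn tile is less likely to bleed into its neighbour; None skips the redraw entirely and leaves only the seams-fix pass. A toy sketch of the checkerboard ordering (illustrative names, not the extension's code):

```python
def chess_order(rows: int, cols: int):
    """Checkerboard tile order: 'white' squares first, then 'black'.

    Neighbouring tiles end up in different passes, which reduces the
    chance of a freshly redrawn tile bleeding into the next one.
    """
    white = [(r, c) for r in range(rows) for c in range(cols) if (r + c) % 2 == 0]
    black = [(r, c) for r in range(rows) for c in range(cols) if (r + c) % 2 == 1]
    return white + black

print(chess_order(2, 2))  # [(0, 0), (1, 1), (0, 1), (1, 0)]
```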

I usually work with 768x768 tiles, a 1536x1536 final image (2x upscale), 0.3-0.4 denoising strength, DPM++ SDE (sometimes Karras), 20-25 steps, and ESRGAN_4x, and I get very good results.

Thanks for your work!

RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead

I have not been able to get this script to work at all. Everything fails with this error, whatever settings I try. I have the most up-to-date version of the web UI.

Traceback (most recent call last):
  File "D:\Git\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\Git\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\Git\stable-diffusion-webui\modules\img2img.py", line 146, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "D:\Git\stable-diffusion-webui\modules\scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "D:\Git\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 543, in run
    upscaler.process()
  File "D:\Git\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 129, in process
    self.image = self.redraw.start(self.p, self.image, self.rows, self.cols)
  File "D:\Git\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 238, in start
    return self.chess_process(p, image, rows, cols)
  File "D:\Git\stable-diffusion-webui\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 208, in chess_process
    processed = processing.process_images(p)
  File "D:\Git\stable-diffusion-webui\modules\processing.py", line 476, in process_images
    res = process_images_inner(p)
  File "D:\Git\stable-diffusion-webui\modules\processing.py", line 654, in process_images_inner
    image = apply_overlay(image, p.paste_to, i, p.overlay_images)
  File "D:\Git\stable-diffusion-webui\modules\processing.py", line 68, in apply_overlay
    image = images.resize_image(1, image, w, h)
  File "D:\Git\stable-diffusion-webui\modules\images.py", line 278, in resize_image
    resized = resize(im, src_w, src_h)
  File "D:\Git\stable-diffusion-webui\modules\images.py", line 261, in resize
    im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
  File "D:\Git\stable-diffusion-webui\modules\upscaler.py", line 64, in upscale
    img = self.do_upscale(img, selected_model)
  File "D:\Git\stable-diffusion-webui\modules\esrgan_model.py", line 154, in do_upscale
    img = esrgan_upscale(model, img)
  File "D:\Git\stable-diffusion-webui\modules\esrgan_model.py", line 225, in esrgan_upscale
    output = upscale_without_tiling(model, tile)
  File "D:\Git\stable-diffusion-webui\modules\esrgan_model.py", line 204, in upscale_without_tiling
    output = model(img)
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Git\stable-diffusion-webui\modules\esrgan_model_arch.py", line 61, in forward
    return self.model(feat)
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Git\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 174, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\Git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 192, 192] to have 3 channels, but got 4 channels instead

GeneratorBuilder AttributeError?

File "C:\Users\User\stable-diffusion-webui\extensions\sd-dynamic-prompts\scripts\dynamic_prompting.py", line 323, in process.set_unlink_seed_from_prompt(unlink_seed_from_promp)
AttributeError: 'GeneratorBuilder' object has no attribute 'set_unlink_seed_from_prompt'

Running into this error no matter what settings I use.

Tiles missing randomly

Hi,

I'm having an issue where one or more tiles are sometimes generated empty; not always, and not always the same tiles.

image

Bug Report / AttributeError: 'USDUSeamsFix' object has no attribute '_width' / ValueError: Coordinate 'right' is less than 'left'

When I'm upscaling a 4k picture to 8k everything works as it should.

When I'm trying to upscale the same 4k picture to 8k with "seams fix", it runs for a long time but then fails near the end (over 80% complete, but I don't have an exact number).

On the A1111 WebUI the error message printed is: AttributeError: 'USDUSeamsFix' object has no attribute '_width'
I get the same error whether or not I check the "Seams Fix" checkbox in the save options at the very bottom.

This happens with the seams fix type set to "Band Pass".
This does NOT happen when the seams fix type is set to "Half-Tile offset pass".

My guess is that there is an extra underscore just before "width" on line 356 of ultimate-upscale.py, as there is no "_width" anywhere else in the code, but plenty of "width". I will try changing line 356 from `p.width = self._width + self.padding * 2` to `p.width = self.width + self.padding * 2` and check whether that fixes the problem. Of course I'll come back here to report. If it works locally, I will try my first ever pull request (I have no idea what I'm doing!).
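The suspected bug reproduces in isolation: looking up an attribute that was never set raises exactly this AttributeError, and dropping the stray underscore restores the intended width calculation. The class below is a minimal hypothetical stand-in, not the real USDUSeamsFix:

```python
class USDUSeamsFixSketch:
    """Minimal stand-in used only to illustrate the suspected typo."""

    def __init__(self, width: int, padding: int):
        self.width = width      # note: no `_width` attribute is ever set
        self.padding = padding

    def band_pass_width(self) -> int:
        # Buggy form: `self._width + self.padding * 2` raises AttributeError,
        # because only `width` exists. The proposed fix:
        return self.width + self.padding * 2

fix = USDUSeamsFixSketch(width=64, padding=32)
print(fix.band_pass_width())  # 128
```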

The detailed log from the WebUI command window is:

Arguments: (0, '', '', 'None', 'None', <PIL.Image.Image image mode=RGBA size=4096x2048 at 0x25B6AF9FCD0>, None, None, None, None, None, None, 30, 15, 4, 0, 1, False, False, 1, 1, 7, 0.35, -1.0, -1.0, 0, 0, 0, False, 768, 768, 3, 0, 32, 0, '', '', 18, 0, 0, 0, 0, 0.25, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, True, False, 0, -1, True, '<p style="margin-bottom:0.75em">You can edit extensions/model-keyword/model-keyword-user.txt to add custom mappings</p>', 'keyword prompt', 'keyword1, keyword2', True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, False, False, False, False, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'LoRA', 'None', 1, 'Refresh models', False, 'Denoised', 5.0, 0.0, False, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, False, 0, True, 384, 384, False, 2, True, True, False, False, True, False, 'C:\\stable-diffusion-webui-master\\extensions\\sd-webui-riffusion\\outputs', 'Refresh Inline Audio (Last Batch)', None, None, None, None, None, None, None, None, False, 4.0, '', 10.0, False, False, True, 30.0, True, False, False, 0, 0.0, 10.0, True, 30.0, True, '', False, False, False, False, 'Auto', 0.5, 1, 0, 0, 384, 384, False, False, True, True, True, False, True, 1, False, False, 2.5, 4, 0, False, 0, 1, False, False, 'u2net', False, False, False, '{inspiration}', None, 0, 1, 384, 384, True, False, True, True, True, False, 1, True, 3, False, 3, False, 3, 1, '<p 
style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 768, 20, 55, 64, 0.35, 32, 7, True, 0, True, 8, 1, 2, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui-master\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui-master\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui-master\modules\img2img.py", line 146, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui-master\modules\scripts.py", line 337, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 533, in run
    upscaler.process()
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 134, in process
    self.image = self.seams_fix.start(self.p, self.image, self.rows, self.cols)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 392, in start
    return self.band_pass_process(p, image, rows, cols)
  File "C:\stable-diffusion-webui-master\extensions\ultimate-upscale-for-automatic1111\scripts\ultimate-upscale.py", line 356, in band_pass_process
    p.width = self._width + self.padding * 2
AttributeError: 'USDUSeamsFix' object has no attribute '_width'

No Upscaling Model loaded

When using your script the results are very blurry, no matter which upscaling model I use. I saw that when selecting LDSR no model is loaded, and no LDSR rendering happens either. It does work as intended, however, when I use the regular SD Upscale script.
Here are the settings used:
Steps: 150, Sampler: DPM adaptive, CFG scale: 11, Seed: 1147910821, Size: 4112x2313, Model hash: 0aecbcfa2c, Model: dreamlikeDiffusion10_10, Denoising strength: 0.2, Mask blur: 4, Ultimate SD upscale upscaler: 4x_foolhardy_Remacri, Ultimate SD upscale tile_size: 768, Ultimate SD upscale mask_blur: 16, Ultimate SD upscale padding: 96
And here is a comparison between original and upscaled:
Original
Upscaled

Install Error

I have the newest version of everything, but Ultimate Upscale will not install.
I do this via the Load from - Search - Install button in the web UI.

GitCommandError: Cmd('git') failed due to: exit code(128) cmdline: git clone -v -- https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git C:\Users\xxxxxxx\stable-diffusion-webui\tmp\ultimate-upscale-for-automatic1111 stderr: 'fatal: destination path 'C:\Users\xxxxxxx\stable-diffusion-webui\tmp\ultimate-upscale-for-automatic1111' already exists and is not an empty directory. '
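The "destination path already exists and is not an empty directory" failure means an earlier install attempt left a non-empty temp clone behind; deleting it and retrying the install from the Extensions tab should resolve it. A hedged cleanup sketch, using the path from the error message (replace the user directory with your own):

```python
import shutil
from pathlib import Path

# Leftover temp clone from the failed install (path taken from the error
# message; adjust the user directory to your own).
stale = Path(r"C:\Users\xxxxxxx\stable-diffusion-webui\tmp\ultimate-upscale-for-automatic1111")

# ignore_errors=True makes this a no-op if the directory is already gone
shutil.rmtree(stale, ignore_errors=True)
```

Deleting the folder in Explorer works just as well; the web UI install then starts from a clean clone.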
