
sd_save_intermediate_images's People

Contributors

alulkesh, joetech


sd_save_intermediate_images's Issues

Could you add an option to save the images in the same folder as the generated images

I need to use this extension because my images otherwise get burned, and I love it, but it creates so many unmanageable folders that it's very hard to find images or keep an overview. I can easily generate 1,000 images in a night, leaving me with 1,000 numbered folders and no idea which picture is in which folder.

I would much prefer to have all my images in the standard txt2img or img2img folder, without having to go through a lot of folders to find the image I'm looking for. I tried looking at the script to adjust this myself, but I'm not a coder.

Thanks so much for making this extension, hopefully you can add this functionality.

Support for the vlad fork

ie https://github.com/vladmandic/automatic

You can use this addon on it and it shows in the extension list, but it's completely broken: the previews don't work and neither does the video generation.

For some reason I get slightly better performance with this fork than with the automatic1111 version, but I'd like to keep using this addon.

Support DDIM, PLMS and UNIPC

@JackeyDeng:

But it does not support DDIM. Which file should I modify to support DDIM? I think it's almost the same to get DDIM intermediate images; just decode the intermediate latents?

@AlUlkesh:

When I made this extension, that just wasn't possible with the a1111 infrastructure. You could only get that information from the k-diffusion samplers.

However, I just saw this in the new a1111 release notes:

rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers

So, perhaps it is possible now. I'll have to look into it.

Inference stops at first save point

When I run it, it stops the generation process:
(screenshots: 00849-1144413703-pikachu, 00850-3222923025-pikachu, 00851-1286797252-pikachu)

The console doesn't come back with errors:

 12%|█████████▌                                                                  | 5/40 [00:04<00:31,  1.09it/s]
Total progress:  12%|███████▌                                                    | 5/40 [00:03<00:26,  1.30it/s]

AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'enable_hr'

getting this error

AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'enable_hr'

full traceback

Error completing request
Arguments: (1, 'taco', '', 'None', 'None', None, {'image': <PIL.Image.Image image mode=RGBA size=512x512 at 0x232FD951D20>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=512x512 at 0x232FD953160>}, None, None, None, 0, 40, 4, 4, 0, 1, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, 0, 32, 0, '', '', 0, 0, 0, 0, 0, 0.25, True, 'Denoised', 5.0, 0.0, 0, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', True, False, False, False, 0, True, 384, 384, False, 2, True, True, False, False) {}
Traceback (most recent call last):
File "C:\SDauto\stable-diffusion-webui\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "C:\SDauto\stable-diffusion-webui\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "C:\SDauto\stable-diffusion-webui\modules\img2img.py", line 152, in img2img
processed = process_images(p)
File "C:\SDauto\stable-diffusion-webui\modules\processing.py", line 479, in process_images
res = process_images_inner(p)
File "C:\SDauto\stable-diffusion-webui\modules\processing.py", line 608, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "C:\SDauto\stable-diffusion-webui\modules\processing.py", line 989, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "C:\SDauto\stable-diffusion-webui\modules\sd_samplers.py", line 511, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
File "C:\SDauto\stable-diffusion-webui\modules\sd_samplers.py", line 440, in launch_sampling
return func()
File "C:\SDauto\stable-diffusion-webui\modules\sd_samplers.py", line 511, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
File "C:\SDauto\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\SDauto\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 521, in sample_dpmpp_2s_ancestral
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "C:\SDauto\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 100, in callback_state
if p.enable_hr:
AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'enable_hr'
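A hedged sketch of the usual fix for this class of error: read the attribute defensively instead of accessing it directly, since img2img processing objects never carry `enable_hr`. The function name is illustrative and does not come from the actual extension source.

```python
def is_hires_pass(p):
    """Return True only when the processing object actually has
    hires-fix enabled; StableDiffusionProcessingImg2Img objects
    lack the attribute entirely, so default to False."""
    return getattr(p, "enable_hr", False)
```

Replacing a bare `if p.enable_hr:` with `if getattr(p, "enable_hr", False):` in the callback would avoid the traceback in both txt2img and img2img modes.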

Saves

Hi, great extension :)

Is there any chance to not save the image with denoise at N00, only what is set by the user?

Right now I set N30, but it saves 3 images: one at N00, one at N30, and the final one.

'=' alignment error

Got this error:

in callback_state:
intermed_number = f"{intermed_number:0{digits}}"
ValueError: '=' alignment not allowed in string format specifier

Fixed it by removing the "0" in the format string in lines 149 and 153 :)
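For context, a short sketch of why this fails: a `0` before the width in a format spec implies `=` (sign-aware zero) alignment, which Python only allows for numbers, so the spec blows up when `intermed_number` is already a string. Casting to `int` first, or using `zfill`, keeps the zero-padding without removing the `0`:

```python
digits = 3

# int: zero-padding works as intended
assert f"{7:0{digits}}" == "007"

# str: the same spec raises the reported ValueError
raised = False
try:
    f"{'7':0{digits}}"
except ValueError as e:
    raised = "alignment" in str(e)
assert raised

# Either cast to int before padding, or zero-fill the string explicitly:
assert f"{int('7'):0{digits}}" == "007"
assert "7".zfill(digits) == "007"
```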

Save to subdir in original output folder

Hello, is there a way to edit the script so the intermediate directory is created inside the output folders that webui creates automatically (where the original results are saved)? i.e.:
outputs/txt2img-images/PROMPT/intermediate_directory
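A minimal sketch of the path layout being requested. The attribute that holds the webui output folder (`p.outpath_samples` in recent webui versions) and the helper name are assumptions; verify against your checkout before patching the script.

```python
import os

def intermediates_dir(outpath_samples: str, prompt_folder: str) -> str:
    """Build the intermediates folder inside the normal webui output
    tree, e.g. outputs/txt2img-images/PROMPT/intermediates."""
    return os.path.join(outpath_samples, prompt_folder, "intermediates")
```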

Show saved intermediate images as preview?

Webui has an option to disable showing the preview images it generates, which I disabled (not sure it made much of a difference to performance). However, if this addon is enabled, it will still save and denoise the steps. So it would be nice if the preview could be switched to show the saved intermediate images this extension creates, instead of the webui version. That would be a nice feature, instead of checking the folder while the images generate.

img2img inter-images resolution wrong

Hello, I am using the Stable Diffusion webui to do 1x SR. My input image is 512 * 512, but the saved intermediate images are 128 * 128. When I check "Also save final image with intermediates", the last image is 512 * 512, but the intermediate images are still 128 * 128. Are my settings wrong?
(screenshot from 2023-08-31 18-10-14)

Current webui version incompatible with extension

And just another request: could you support saving the last known settings chosen for everything in the script? I often find myself having to re-click and enter the same things for all the settings. Hopefully it can be supported.

Suggestions!

Hi,
I love your script.

Sorry, wrong way of testing: it works perfectly today, video and intermediate images all good. Maybe I did something wrong before.

No suggestions anymore, it works nicely. You can close this issue.

GPU RAM occupation:

  • Full: 11 GB
  • Approx NN: 5 GB
  • Approx cheap: 5 GB

So my suggestions are:

  1. Save images only from Approx NN.
  2. Save every image at the final size (if Hires is on, final size = hires size; if Hires is off, final size = base size).

This could probably also help FFmpeg create low-res and hi-res frames together in a single video. Currently the script can't combine lowres and hires images into a single video, maybe because the resolutions differ; I don't know.

New feature: Make a video file

I have just implemented this feature. It uses ffmpy/ffmpeg. In theory that should work for everyone, since those are a dependency of gradio. But if you have problems with this, please let me know.
(sample videos attached)

It doesn't seem to generate a video...

Calculating sha256 for C:\Projects\StableDiffusion\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors: 235745af8d86bf4a4c1b5b4f529868b37019a10f7c0b2e79ad0abca3a22bc6e1
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:09<00:00, 4.39it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:09<00:00, 4.46it/s]
Total progress: 166it [03:36, 1.30s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:16<00:00, 2.45it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:16<00:00, 2.48it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 82/82 [00:35<00:00, 2.29it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:17<00:00, 2.36it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 41/41 [00:16<00:00, 2.45it/s]
*** Error running postprocess: C:\Projects\StableDiffusion\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py
Traceback (most recent call last):
File "C:\Projects\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 651, in postprocess
script.postprocess(p, processed, *script_args)
File "C:\Projects\StableDiffusion\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 731, in postprocess
make_video(p, ssii_is_active, ssii_final_save, ssii_intermediate_type, ssii_every_n, ssii_start_at_n, ssii_stop_at_n, ssii_mode, ssii_video_format, ssii_mp4_parms, ssii_video_fps, ssii_add_first_frames, ssii_add_last_frames, ssii_smooth, ssii_seconds, ssii_lores, ssii_hires, ssii_ffmpeg_bat, ssii_bat_only, ssii_debug)
File "C:\Projects\StableDiffusion\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 177, in make_video
make_video_or_bat(p, ssii_is_active, ssii_final_save, ssii_intermediate_type, ssii_every_n, ssii_start_at_n, ssii_stop_at_n, ssii_mode, ssii_video_format, ssii_mp4_parms, ssii_video_fps, ssii_add_first_frames, ssii_add_last_frames, ssii_smooth, ssii_seconds, ssii_lores, ssii_hires, ssii_ffmpeg_bat, ssii_bat_only, ssii_debug)
File "C:\Projects\StableDiffusion\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 254, in make_video_or_bat
os.replace(path_name_org, path_name_seq)
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'outputs/txt2img-images\sd_xl_base_1.0\intermediates\00000\00000-041-1818421718.png' -> 'outputs/txt2img-images\sd_xl_base_1.0\intermediates\00000\00000-1061-1818421718.png'


API support needed

Does this extension provide an API? I'd like to know how to use it via the webui API.
I've tried searching for how to use it, but there's no info about it.
Thank you for your great work!

Video generator requests

Glad you added that in; I was just about to write a tool to automate making them into videos, after doing it manually with ffmpeg.

If you could add an option to hold the last remaining (final) image for x seconds, that would be good. Actually, an option to hold the final image at the start, before showing the step progress, would also be nice. Just some added options like that, thanks.

Request - put the incremental numbers at the end of the name string

Hey,

Currently the name pattern puts the incremental numbers for the single images at the front of the name. When I import such a sequence into, say, DaVinci Resolve, I cannot import it as an image sequence; each image gets treated independently. That means I would need to fix 150 images to have a duration of one frame again instead of 5 seconds.

Could the output be written with a name pattern where the counter is at the end of the string? That's the common pattern for image sequences. At the moment it is xxx0001xxxxxxxxxxxxxxxx.png; needed would be xxxxxxxxxxxxxxxxxxxxxx0001.png

Many thanks for reading and considering.

Kind regards
Reiner
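Until the extension supports this natively, a hedged workaround sketch: rewrite the saved filenames so the frame counter moves to the end of the stem. The regex assumes webui-style names like `00000-001-seed.png`; adjust it if your filename pattern differs.

```python
import re

def sequence_name(filename: str) -> str:
    """Move the middle frame counter (e.g. -001-) to the end of the
    stem, so DaVinci Resolve detects the files as one image sequence."""
    m = re.match(r"^(\d+)-(\d+)-(.+)\.png$", filename)
    if not m:
        return filename  # leave non-matching names untouched
    prefix, frame, rest = m.groups()
    return f"{prefix}-{rest}-{frame}.png"
```

Running this over a folder with `os.rename` would turn `00000-001-12345.png` into `00000-12345-001.png`.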

how to speed up the video?

I have no idea how to control the speed of the final video.
I want the video to be faster, but no matter which parameters I try, nothing works for me.
Any suggestions?

AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'iteration'

As long as you don't have this option checked (see screenshot), you're good.

But as soon as you do check it, you will get the exception below, which you can't escape even after unchecking.

Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\modules\img2img.py", line 152, in img2img
    processed = process_images(p)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 470, in process_images
    res = process_images_inner(p)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 575, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "E:\AI\stable-diffusion-webui\modules\processing.py", line 917, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "E:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 501, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
  File "E:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 439, in launch_sampling
    return func()
  File "E:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 501, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args={
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 148, in sample_euler_ancestral
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
  File "E:\AI\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 455, in callback_state
    p.intermed_batch_iter = p.iteration
AttributeError: 'StableDiffusionProcessingImg2Img' object has no attribute 'iteration'

In addition, after the above exception, clicking this button (see screenshot) will end with:

Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\AI\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 74, in ssii_save_settings_do
    ui_settings = ui_setting_set(ui_settings, value, eval(key))
  File "E:\AI\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 51, in ui_setting_set
    this_module = os.path.basename(__file__)
NameError: name '__file__' is not defined

I hope this helps.
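A hedged sketch of the usual fix for the first traceback: read `iteration` defensively, since img2img processing objects may not carry it. (The second traceback, the `__file__` NameError, suggests the settings code runs in a context without module globals and needs the script path captured earlier.) The name below mirrors the traceback, not the actual extension source.

```python
def batch_iter(p):
    """Return the current batch-count iteration, defaulting to 0 when
    the processing object (e.g. img2img) does not expose `iteration`."""
    return getattr(p, "iteration", 0)
```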

SD Update not finding "KDiffusionSampler"

I was able to use this yesterday, but after today's Stable Diffusion webui update I'm getting an error.

Error loading script: sd_save_intermediate_images.py
Traceback (most recent call last):
  File "G:\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "G:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "G:\stable-diffusion-webui\extensions\sd_save_intermediate_images\scripts\sd_save_intermediate_images.py", line 10, in <module>
    from modules.sd_samplers import KDiffusionSampler, sample_to_image
ImportError: cannot import name 'KDiffusionSampler' from 'modules.sd_samplers' (G:\stable-diffusion-webui\modules\sd_samplers.py)
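The update moved the sampler classes into submodules, so the old import path no longer exists. A generic fallback-import helper can keep one script working across both layouts; the webui module names in the comment are assumptions from that refactor, so verify them against your checkout:

```python
import importlib

def import_first(attr, *module_names):
    """Return `attr` from the first listed module that provides it."""
    for name in module_names:
        try:
            return getattr(importlib.import_module(name), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"{attr!r} not found in any of {module_names}")

# In the extension this would look roughly like:
# KDiffusionSampler = import_first(
#     "KDiffusionSampler",
#     "modules.sd_samplers_kdiffusion",  # post-refactor location
#     "modules.sd_samplers",             # pre-refactor location
# )
```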

save only before Sampling, upscaler, roop etc

Thanks for the addon, I'm loving it.

I'd like to have intermediate backup copies/instances of the final results as well.

E.g., after hitting generate, I want to save all of these instances/results:
before the sampling method,
before the upscaler,
before roop,
and so on,
then the final result.

For comparisons, future reference, post-production, and of course more control over desirable merges.

Can you implement this, or suggest a workaround?

Thank you.

Only last set of images kept when Batch count > 1

When I want to save the intermediates of a group of images created using Batch size = 1 and Batch count > 1, only one set of files is created, and each successive image in the batch overwrites the intermediates of the previous one.

Here's a sample batch:
(screenshot: sd_sample_1)

and the resulting intermediates: (there was only this folder generated)
(screenshot: sd_sample_2)

finally, the debug text:

2023-01-19 01:08:54,795 DEBUG ssii_intermediate_type, ssii_every_n, ssii_stop_at_n, ssii_video, ssii_video_format, ssii_video_fps, ssii_video_hires, ssii_smooth, ssii_seconds, ssii_debug:
2023-01-19 01:08:54,795 DEBUG Denoised, 1.0, 0.0, False, mp4, 2.0, 2, False, 0.0, True
2023-01-19 01:08:54,798 DEBUG Step: 0
2023-01-19 01:08:54,800 DEBUG hr: False
2023-01-19 01:08:54,807 DEBUG ssii_intermediate_type, ssii_every_n, ssii_stop_at_n: Denoised, 1.0, 0.0
2023-01-19 01:08:54,807 DEBUG Step: 0
2023-01-19 01:08:54,810 DEBUG p.intermed_outpath: outputs/txt2img-images\intermediates\01116
2023-01-19 01:08:54,810 DEBUG p.intermed_outpath_suffix:
2023-01-19 01:08:54,811 DEBUG p.steps: 8
2023-01-19 01:08:54,811 DEBUG p.all_seeds: [2518042455, 2518042456]
2023-01-19 01:08:54,811 DEBUG p.cfg_scale: 7
2023-01-19 01:08:54,812 DEBUG p.sampler_name: Euler a
2023-01-19 01:08:54,812 DEBUG p.batch_size: 1

2023-01-19 01:09:00,046 DEBUG filename: 01116-001-2518042455
2023-01-19 01:09:05,352 DEBUG filename: 01116-002-2518042455
2023-01-19 01:09:10,599 DEBUG filename: 01116-003-2518042455
2023-01-19 01:09:15,858 DEBUG filename: 01116-004-2518042455
2023-01-19 01:09:21,119 DEBUG filename: 01116-005-2518042455
2023-01-19 01:09:26,362 DEBUG filename: 01116-006-2518042455
2023-01-19 01:09:31,607 DEBUG filename: 01116-007-2518042455
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [01:01<00:00,  7.63s/it]
2023-01-19 01:09:51,167 DEBUG filename: 01116-001-2518042455
2023-01-19 01:09:56,433 DEBUG filename: 01116-002-2518042455
2023-01-19 01:10:01,754 DEBUG filename: 01116-003-2518042455
2023-01-19 01:10:07,020 DEBUG filename: 01116-004-2518042455
2023-01-19 01:10:12,258 DEBUG filename: 01116-005-2518042455
2023-01-19 01:10:17,487 DEBUG filename: 01116-006-2518042455
2023-01-19 01:10:22,749 DEBUG filename: 01116-007-2518042455
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:42<00:00,  5.26s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 16/16 [01:29<00:00,  5.58s/it]
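The debug log shows both images writing `01116-NNN-2518042455` even though `p.all_seeds` holds two seeds, i.e. the per-image index is not reflected in the filename. A hedged sketch of the kind of fix involved; variable names mirror the debug log, not the actual extension source:

```python
def intermed_filename(base, step, all_seeds, image_index):
    """Build an intermediate filename that is unique per image in the
    batch count: bump the base number and pick that image's own seed."""
    seed = all_seeds[image_index]
    return f"{base + image_index:05}-{step:03}-{seed}"
```

With this, the second image of the run above would write `01117-001-2518042456` instead of overwriting `01116-001-2518042455`.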

Filename not properly being passed to ffmpeg

I believe the problem lies around this line.

img_file = intermed_pattern.replace("%%%", "%03d") + ".png"

PS D:\opt\DeepLearning\Voldy\stable-diffusion-webui> ffmpeg -benchmark -framerate 1 -i "outputs\img2img-images\20230119224931_1girl blurry blurry background blurry foreground boots closed\intermediates\00000\00000-%03d-20230119224931_70_7.5_87_120_None_Euler a_135842745_3fd17af06c_2023-01-19_20230119225119.png" -filter_complex "setpts=2.5*PTS [v4]; [v4]minterpolate=fps=3:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1" "outputs/img2img-images\20230119224931_1girl blurry blurry background blurry foreground boots closed\intermediates\00000\00000-20230119224931_70_7.5_87_120_None_Euler a_135842745_3fd17af06c_2023-01-19_20230119225119.mp4"
ffmpeg version 4.2.3 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9.3.1 (GCC) 20200523
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[image2 @ 000002c4746bb3c0] Could find no file with path 'outputs\img2img-images\20230119224931_1girl blurry blurry background blurry foreground boots closed\intermediates\00000\00000-%03d-20230119224931_70_7.5_87_120_None_Euler a_135842745_3fd17af06c_2023-01-19_20230119225119.png' and index in the range 0-4
outputs\img2img-images\20230119224931_1girl blurry blurry background blurry foreground boots closed\intermediates\00000\00000-%03d-20230119224931_70_7.5_87_120_None_Euler a_135842745_3fd17af06c_2023-01-19_20230119225119.png: No such file or directory

The actual filename is 00000-000-20230119224931_70_7.5_87_120_None_Euler a_135842745_3fd17af06c_2023-01-19_20230119225119.png

Note the %03d that should be 000 (or the intermediate step number).
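ffmpeg's image2 demuxer expects the `%03d` frames to exist as a contiguous sequence starting near 0, so a workaround is to renumber the saved intermediates before invoking it (the `os.replace` pass visible in the video-generation traceback elsewhere on this page does something similar). A sketch, assuming the standard `NNNNN-SSS-name.png` intermediate pattern:

```python
import os
import re

def renumber_frames(folder: str) -> None:
    """Rename intermediates in `folder` so their step counters form a
    contiguous 000..N sequence that an ffmpeg %03d pattern can match."""
    pngs = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    for seq, name in enumerate(pngs):
        # replace the first -NNN- group with the new sequence number
        new_name = re.sub(r"-\d+-", f"-{seq:03}-", name, count=1)
        if new_name != name:
            os.replace(os.path.join(folder, name),
                       os.path.join(folder, new_name))
```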
