
frame2frame's Introduction

frame2frame

Automatic1111 Stable Diffusion WebUI extension, generates img2img against frames in video files.

Still in development but functional. It will succeed gif2gif in the near future.

  • Builds the video file in real time, generating SD images and writing video frames in the same pass.
  • An accompanying .png file generated alongside the video houses the PNG info.
  • Attempts to extract and restore audio to the output file; this will be made optional later.
  • Does not require intermediate images to be saved to disk, though you can choose to save them.
  • Currently defaults to H.264 output; more codecs to be added.
  • Accepts GIFs as well.
  • ControlNet extension handling improved:
    • Script will no longer overwrite existing ControlNet input images.
    • Script will only target ControlNet models with no input image specified.
    • Allows, for example, a static depth background while animation feeds openpose.
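The targeting rule above can be sketched roughly as follows (a hypothetical helper; the unit objects and the `image` attribute name are assumptions, not the extension's actual API):

```python
def units_needing_frames(units):
    """Return only the ControlNet units frame2frame should drive.

    Units that already carry a user-supplied input image are skipped,
    so e.g. a static depth background stays intact while the animation
    feeds an openpose unit that has no image of its own.
    """
    return [u for u in units if getattr(u, "image", None) is None]
```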

(screenshot: ControlNetInst)

frame2frame's People

Contributors

timmahw, lonicamewinsky


frame2frame's Issues

[Bug] - Sometimes F2F breaks IMG2IMG even when it's not selected as a script - (Perhaps when a previously loaded video isn't closed before restart)

My IMG2IMG has been breaking with the error below sporadically for the past few days. I hadn't narrowed it down to a specific extension, as I just kept disabling a bunch and eventually it would resolve. However, I did narrow it down to F2F today: I get the error below if it's enabled, but if I disable it and restart, I don't get the error.

I -think- what's going on is that closing F2F without clicking the [x] on the video (and removing it) can cause problems. What's bad is that this error occurs even if F2F isn't selected in the script box, and after a restart of the whole webui.

I've disabled F2F for now as I don't have a use for it, and I know it's still a work in progress, but I definitely wanted to pass this along. If it's something related to my specific setup, my apologies:


Closing server running on port: 7860
Restarting UI...
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 377.6s (import gradio: 0.9s, import ldm: 0.3s, other imports: 0.8s, list extensions: 0.4s, list SD models: 0.7s, load scripts: 0.7s, load SD checkpoint: 3.1s, scripts before_ui_callback: 361.6s, create ui: 8.6s, gradio launch: 0.3s, scripts app_started_callback: 0.2s).

Traceback (most recent call last):
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1013, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data
    processed_input.append(block.preprocess(inputs[i]))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 540, in preprocess
    return self._round_to_precision(x, self.precision)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 502, in _round_to_precision
    return float(num)

ValueError: could not convert string to float: ''
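For what it's worth, the crash comes from Gradio coercing a Number field's value straight through `float()`, so a stale empty string left behind by the video field is enough to trigger it. A defensive coercion (a sketch, not the actual webui or Gradio code) would look like:

```python
def to_float(value, default=0.0):
    """Coerce a UI field value to float, falling back on empty/None.

    float('') raises ValueError: could not convert string to float: '',
    which is exactly the error in the traceback above.
    """
    try:
        return float(value)
    except (TypeError, ValueError):
        return default
```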


Incompatibility with recent ControlNet code

When trying to use the script on the img2img tab I get this error.
(And I also get the same error with your gif2gif extension right now)

Traceback (most recent call last):
  File "D:\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "D:\stable-diffusion\stable-diffusion-webui\modules\scripts.py", line 399, in run
    processed = script.run(p, *script_args)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 316, in run
    generated_frames.append(generate_frame(frame))
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 253, in generate_frame
    cn_layers = cnet.get_all_units_in_processing(orig_p)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 94, in get_all_units_in_processing
    return get_all_units(p.scripts, p.script_args)
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 105, in get_all_units
    return get_all_units_from(script_args[cn_script.args_from:cn_script.args_to])
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 125, in get_all_units_from
    units.append(to_processing_unit(script_args[i]))
  File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 181, in to_processing_unit
    assert isinstance(unit, ControlNetUnit), f'bad argument to controlnet extension: {unit}\nexpected Union[dict[str, Any], ControlNetUnit]'
AssertionError: bad argument to controlnet extension: <scripts.external_code.ControlNetUnit object at 0x0000014C1BDBE080>
expected Union[dict[str, Any], ControlNetUnit]

WebUI commit: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
ControlNet commit: 8682c1e49728926bb7cc7753da8917d1ab095fb1
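A likely cause: frame2frame and ControlNet end up with two separately loaded copies of the `ControlNetUnit` class, so the `isinstance` check in `external_code.py` fails even though the object is structurally fine. Since `to_processing_unit` also accepts plain dicts, one hedged workaround (a sketch only; the real fix is aligning the two extensions' imports) is to pass dicts instead:

```python
def unit_to_dict(unit):
    """Flatten a ControlNetUnit-like object into a plain dict.

    Dicts satisfy ControlNet's Union[dict[str, Any], ControlNetUnit]
    check regardless of which module load the class came from.
    """
    return {k: v for k, v in vars(unit).items() if not k.startswith("_")}
```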

New technique for temporally stable video to video generation

Hey, couldn't find a way to reach out to you directly so I'm throwing it here.

I have figured out a new way to generate videos in SD that has great temporal stability; you can see an example of the output here:
https://i.imgur.com/H4KtklP.gif

The process is pretty simple, and I'd like to release it as an extension, but I don't do Python. It seems like you have most of the pieces needed across your projects, so I was curious whether you might like to collaborate on adding this to your project or releasing it as a separate extension.

I don't really want to publicly disclose the secret sauce before releasing it, so if you'd like to discuss it you can reach me at [email protected]

[Bug] - FPS reduction slider can lose its place (give inaccurate results) after sliding/skipping

Hey again,

One minor UI issue I noticed while playing around with frame2frame earlier: if you use the slider a bit in the same session (or slide, jump, slide, jump), it has a tendency to lose its place and come up with an inaccurate FPS value. I used OBS to capture a quick example (video below). Minor, but wanted to mention. Thanks!

(I'm on the latest commit of both the extension and A1111.) The MP4 in the sample was rendered out of Vegas at standard NTSC 29.97 fps.

(video attachment: 2clipbug.mp4)
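One plausible fix on the extension side (entirely hypothetical; the slider internals aren't shown here): frame skipping can only produce rates of the form source_fps / N, so the slider value should snap to the nearest achievable rate instead of keeping whatever intermediate value the drag left behind:

```python
def effective_fps(source_fps, requested_fps):
    """Snap a requested FPS to the nearest rate frame-skipping can hit.

    Keeping every Nth frame yields exactly source_fps / N, so any other
    value displayed by the slider is unreachable and will drift.
    """
    step = max(1, round(source_fps / max(requested_fps, 1e-9)))
    return source_fps / step
```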

Error completing request.

Hello, I know you're just starting on this (I found it in a GitHub search of recently updated Automatic1111 extensions), but due to that, I figured maybe you're looking for feedback. I installed using the URL, put a 15-second MP4 in the box (it loaded fine), added a prompt, but got the following error when clicking Generate. It could very well be my setup or something wrong with my personal installation/extensions/etc.; I have had occasional crashes the past few weeks. Good luck, and thanks for continuing to raise the bar with SD via your work.

===
Restarting UI...
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Loading weights [ca13540e36] from e:\Stable Diffusion Checkpoints\2023-03-04 - Topnotch 66 - (Flip half) - 40img - (Flip half) - 3000 steps (of 5000 max).ckpt
Loading VAE weights specified in settings: C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying xformers cross attention optimization.
Weights loaded in 1.2s (load weights from disk: 0.7s, apply weights to model: 0.1s, load VAE: 0.1s, move model to device: 0.2s).
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Will process 1 animation(s) with 3430 total generations.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 1 images in a total of 1 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:09<00:00, 2.85it/s]
Error completing request | 28/96040 [00:09<9:17:00, 2.87it/s]
Arguments: ('task(mzs0ni3fgdfr7pk)', 0, 'topnotch artstyle ', '', [], <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x222C1B27A30>, None, {'image': <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x222C1B25FF0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1920x1080 at 0x222C1B278E0>}, None, None, None, None, 36, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 1080, 1920, 0, 0, 32, 0, '', '', '', [], 11, False, '', 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, <tempfile._TemporaryFileWrapper object at 0x00000222C1B264A0>, True, True, True, '29', '3430', True, True, True, 0, 0.1, 1, 'None', False, 0, 2, 512, 512, False, None, None, None, 50, False, 4.0, '', 10.0, 'Linear', 3, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui\modules\scripts.py", line 376, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 267, in run
    out_clip.write_videofile(out_filename, codec='h264', progress_bar=False)
TypeError: VideoClip.write_videofile() got an unexpected keyword argument 'progress_bar'

ERROR:asyncio:Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Will process 1 animation(s) with 459 total generations.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 1 images in a total of 1 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.42it/s]
Error completing request | 55/96040 [00:55<3:42:01, 7.21it/s]
Arguments: ('task(xjtsx4lighawizg)', 0, 'topnotch artstyle ', '', [], <PIL.Image.Image image mode=RGBA size=960x720 at 0x222C178CFA0>, None, {'image': <PIL.Image.Image image mode=RGBA size=960x720 at 0x222C178F7C0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=960x720 at 0x222C178F910>}, None, None, None, None, 36, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 720, 960, 0, 0, 32, 0, '', '', '', [], 11, False, '', 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 1, 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, <tempfile._TemporaryFileWrapper object at 0x00000222C178F070>, True, True, True, '29', '459', True, True, True, 0, 0.1, 1, 'None', False, 0, 2, 512, 512, False, None, None, None, 50, False, 4.0, '', 10.0, 'Linear', 3, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, '

Will upscale the image depending on the selected target size type

', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "C:\stable-diffusion-webui\modules\scripts.py", line 376, in run
    processed = script.run(p, *script_args)
  File "C:\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 267, in run
    out_clip.write_videofile(out_filename, codec='h264', progress_bar=False)
TypeError: VideoClip.write_videofile() got an unexpected keyword argument 'progress_bar'
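The `progress_bar` keyword was removed in moviepy 1.0; newer versions take a `logger` argument instead (`logger=None` silences output). A version-tolerant wrapper (a sketch around whatever clip object the script holds, not the extension's actual code) could be:

```python
import inspect

def write_video_compat(clip, filename, codec="libx264", **kwargs):
    """Call write_videofile with only the kwargs this moviepy accepts.

    moviepy < 1.0 took progress_bar=False; >= 1.0 takes logger=None.
    """
    params = inspect.signature(clip.write_videofile).parameters
    if "progress_bar" in params:
        kwargs.setdefault("progress_bar", False)
    else:
        kwargs.setdefault("logger", None)
    clip.write_videofile(filename, codec=codec, **kwargs)
```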

Error loading script: frame2frame.py

I have a problem loading the script and can't figure out a solution.
Do you have an idea what the problem is and how to solve it?

ERROR:
Error loading script: frame2frame.py
Traceback (most recent call last):
  File "C:\Users\Admin\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\Users\Admin\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "", line 883, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "C:\Users\Admin\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 14, in
    from moviepy.editor import VideoFileClip
  File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\editor.py", line 36, in
    from .video.io.VideoFileClip import VideoFileClip
  File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 3, in
    from moviepy.audio.io.AudioFileClip import AudioFileClip
  File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\audio\io\AudioFileClip.py", line 3, in
    from moviepy.audio.AudioClip import AudioClip
  File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\audio\AudioClip.py", line 4, in
    import proglog
ModuleNotFoundError: No module named 'proglog'
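`proglog` is a dependency of moviepy that evidently didn't get installed into the webui's venv. Installing it there should clear the error (the path below assumes a default Windows install like the one in the traceback):

```shell
# Run from the stable-diffusion-webui folder, using the venv's own pip
# so the package lands in the environment the webui actually uses:
venv\Scripts\pip.exe install proglog
```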

Has this project been abandoned?

Hey!

Just wanted to say thanks again for developing this. It's been 6 months since an update, and it doesn't seem like the sister extension (gif2gif) has had a recent update either. It had continued to work for a long while, but I don't think it currently loads properly on A1111 1.6. I guess I'm just wondering if the developer(s) have been busy or if this project is essentially done for good.

I really enjoyed it when it worked, and it'd be cool to have it as an option to use w/ SDXL along w/ Controlnet and the new IP-Adapter model.

Hope all is well!

Enhancement request: Individual Gif/videos per control net

I'm working with gif2gif right now, and as you stated previously it will be dropped soon in favor of this plugin, so I'll drop this here.

I have a scene where I'm trying to add an appendage to the character, but I'd like the rest of the character's movement to stay the same as the source video. Currently the most efficient way to do this is to generate a folder of PNGs from the openpose preprocessing of the character and then run them one by one through img2img manually, because there is no way to feed a GIF of an openpose preprocess through gif2gif. Is it at all possible to put a custom GIF of the openpose output into the workflow? Might be asking too much, but better to ask and hear no than to never ask at all!
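A sketch of how the requested feature might look inside the generation loop (entirely hypothetical names; `animated_frames` maps a unit index to its own frame list, and any unmapped unit keeps its static image):

```python
def frames_for_step(units, animated_frames, frame_index):
    """Choose one input image per ControlNet unit for a single frame.

    Units with their own clip advance with the animation; the rest
    keep whatever static image (e.g. a fixed openpose render) they have.
    """
    inputs = []
    for i, unit in enumerate(units):
        clip = animated_frames.get(i)
        if clip is not None:
            inputs.append(clip[frame_index % len(clip)])
        else:
            inputs.append(getattr(unit, "image", None))
    return inputs
```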
