
distyapps / seait


SEAIT is a user-friendly application that simplifies the installation process of AI-related projects

Python 100.00%
ai artificial-intelligence automatic1111 install installation installer invokeai large-language-models llm stable-diffusion

seait's People

Contributors

distyapps


seait's Issues

Not able to downgrade to an older git commit

I installed Vladiffusion - A1111 and am not able to downgrade it to an older commit; the new commit has some issues.
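A hedged sketch of how the installed copy could be rolled back to an older commit from a command prompt (the path and the commit hash below are placeholders, not taken from this report):

  rem Open a command prompt inside the project folder that SEAIT created (example path only)
  cd /d "C:\path\to\seait\automatic"
  rem List recent commits and note the hash of the version to roll back to
  git log --oneline -20
  rem Check out that commit (replace <commit-hash> with the hash noted above)
  git checkout <commit-hash>

Checking out a specific hash leaves the repository in a detached HEAD state, which is fine for pinning a working version until the upstream issue is fixed.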

Python does not install properly

(screenshot attached)
I tried installing Python using this tool and it said the installation was successful. However, when launching VisionCrafter, it now states Python is not installed and it will not install from the Store.

Auto-GPT: Can't open main.py after install

I am getting the following error after installing Auto-GPT. I'm on SEAIT version 0.0.8. I have tried deleting the folder and reinstalling Auto-GPT, but I still get the error.

E:\seait.v0.0.8\Auto-GPT\scripts\main.py': [Errno 2] No such file or directory
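If the cloned repository no longer contains scripts\main.py, newer Auto-GPT versions are started as a module instead; a hedged sketch (the venv path is an assumption, not something this report confirms):

  rem From inside the Auto-GPT folder that SEAIT cloned (path from the report above)
  cd /d E:\seait.v0.0.8\Auto-GPT
  rem Recent Auto-GPT releases are launched as a module rather than via scripts\main.py
  rem (the venv path below is an assumption; use plain "python" if no venv exists)
  venv\Scripts\python.exe -m autogpt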

Error

Since the last few updates I am getting this when trying to launch SD:

launching stable-diffusion-webui
command run and script run
'J:\Super' is not recognized as an internal or external command,
operable program or batch file.

J:\Super Easy AI Installer Tool\stable-diffusion-webui>
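The truncated 'J:\Super' suggests the launch command is being split at the space in "J:\Super Easy AI Installer Tool"; on Windows, a path containing spaces has to be quoted, for example (illustrative only):

  rem cmd stops parsing an unquoted path at the first space, so wrap it in quotes
  cd /d "J:\Super Easy AI Installer Tool\stable-diffusion-webui"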

Update "Vladiffusion"

Hi, you've got a great app! Used it a few times to install text gen webui.

Just wanted to let you know to please change the "Vladiffusion - A1111 fork" to "SD.Next: Stable Diffusion Evolved", as that has never been its name, and it can't even rightly be called a fork at this point.

Thanks!

When launching the installation of the A1111 fork (vladmandic version), error with missing module named 'rich'

The installation of the vladmandic version of A1111 fails because a dependency ('rich') is missing from the automatically created venv:

(venv) D:\Stable-Diffusion\seait>seait
installing automatic with at D:\Stable-Diffusion\seait\automatic
Cloning into 'D:\Stable-Diffusion\seait\automatic'...
remote: Enumerating objects: 20808, done.
remote: Counting objects: 100% (16/16), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 20808 (delta 2), reused 12 (delta 2), pack-reused 20792
Receiving objects: 100% (20808/20808), 30.74 MiB | 19.38 MiB/s, done.

Resolving deltas: 100% (14647/14647), done.
automatic cloned into D:\Stable-Diffusion\seait\automatic
Creating virtual environment for automatic at D:\Stable-Diffusion\seait\automatic\venv
Virtual environment created for automatic at D:\Stable-Diffusion\seait\automatic\venv
Traceback (most recent call last):
  File "D:\Stable-Diffusion\seait\automatic\launch.py", line 19, in <module>
    setup.parse_args()
  File "D:\Stable-Diffusion\seait\automatic\setup.py", line 446, in parse_args
    from modules.script_loading import preload_extensions
  File "D:\Stable-Diffusion\seait\automatic\modules\script_loading.py", line 3, in <module>
    import modules.errors as errors
  File "D:\Stable-Diffusion\seait\automatic\modules\errors.py", line 3, in <module>
    from rich import print # pylint: disable=redefined-builtin
ModuleNotFoundError: No module named 'rich'
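A possible manual workaround (a sketch, using the venv path from the log above) is to install the missing package into the venv that SEAIT created and then launch again:

  rem Install the missing dependency into the venv created for the vladmandic install
  D:\Stable-Diffusion\seait\automatic\venv\Scripts\python.exe -m pip install rich

If more modules turn out to be missing, the project's requirements file (if present) can be installed into the same venv in the same way.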

possible to add?

Hi, very cool app! Is it possible for you to add something that would allow downloading different models automatically, including from Civitai?
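Until something like that exists, the idea roughly amounts to fetching a model file from its Civitai download URL into a project's models folder; a hedged sketch (the URL and destination below are placeholders):

  rem curl ships with recent Windows builds; the URL and destination below are placeholders only
  curl -L -o "C:\path\to\stable-diffusion-webui\models\Stable-diffusion\model.safetensors" "https://civitai.com/api/download/models/<model-version-id>"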

[Bug]: Error while installing Text Generation web UI

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\download-model.py", line 20, in <module>
    import tqdm
ModuleNotFoundError: No module named 'tqdm'
text-generation-webui Instructed installation completed
Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\server.py", line 12, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

Step-by-step instructions to reproduce the issue.

Just installing Text Generation web UI from the GUI

Expected Behavior

Launch after installation

Current Behavior

Error at:

Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\download-model.py", line 20, in <module>
    import tqdm
ModuleNotFoundError: No module named 'tqdm'
text-generation-webui Instructed installation completed
Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\server.py", line 12, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'

Version or Commit where the problem happens

0.1.4.7

What platform do you use SEAIT on?

No response

What Python version are you running?

No response

What processor are you running SEAIT on?

No response

What GPU are you running SEAIT on?

No response

How much GPU VRAM are you running SEAIT with?

No response

On what project are you facing the issue?

No response

Console logs

installing text-generation-webui with at C:\Users\Daedar\seait\text-generation-webui
Cloning into 'C:\Users\Daedar\seait\text-generation-webui'...
remote: Enumerating objects: 10951, done.
remote: Counting objects: 100% (2794/2794), done.
remote: Compressing objects: 100% (478/478), done.
remote: Total 10951 (delta 2502), reused 2458 (delta 2313), pack-reused 8157
Receiving objects: 100% (10951/10951), 3.51 MiB | 9.33 MiB/s, done.
Resolving deltas: 100% (7475/7475), done.
text-generation-webui cloned into C:\Users\Daedar\seait\text-generation-webui
Creating virtual environment for text-generation-webui at C:\Users\Daedar\seait\text-generation-webui\venv
Virtual environment created for text-generation-webui at C:\Users\Daedar\seait\text-generation-webui\venv
Installing requirements for text-generation-webui at C:\Users\Daedar\seait\text-generation-webui\venv
Ignoring bitsandbytes: markers 'platform_system != "Windows"' don't match your environment
Collecting bitsandbytes==0.41.1 (from -r requirements.txt (line 27))
  Downloading https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl (152.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 152.7/152.7 MB 8.4 MB/s eta 0:00:00
ERROR: auto_gptq-0.4.2+cu117-cp310-cp310-win_amd64.whl is not a supported wheel on this platform.
text-generation-webui requirements installed at C:\Users\Daedar\seait\text-generation-webui\venv
Executing install instructions for text-generation-webui
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch
  Using cached https://download.pytorch.org/whl/cu117/torch-2.0.1%2Bcu117-cp311-cp311-win_amd64.whl (2343.6 MB)
Collecting torchvision
  Using cached https://download.pytorch.org/whl/cu117/torchvision-0.15.2%2Bcu117-cp311-cp311-win_amd64.whl (4.9 MB)
Collecting torchaudio
  Using cached https://download.pytorch.org/whl/cu117/torchaudio-2.0.2%2Bcu117-cp311-cp311-win_amd64.whl (2.5 MB)
Collecting filelock (from torch)
  Obtaining dependency information for filelock from https://files.pythonhosted.org/packages/52/90/45223db4e1df30ff14e8aebf9a1bf0222da2e7b49e53692c968f36817812/filelock-3.12.3-py3-none-any.whl.metadata
  Using cached filelock-3.12.3-py3-none-any.whl.metadata (2.7 kB)
Collecting typing-extensions (from torch)
  Obtaining dependency information for typing-extensions from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
  Using cached typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting sympy (from torch)
  Using cached sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting networkx (from torch)
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch)
  Using cached https://download.pytorch.org/whl/Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting numpy (from torchvision)
  Obtaining dependency information for numpy from https://files.pythonhosted.org/packages/72/b2/02770e60c4e2f7e158d923ab0dea4e9f146a2dbf267fec6d8dc61d475689/numpy-1.25.2-cp311-cp311-win_amd64.whl.metadata
  Using cached numpy-1.25.2-cp311-cp311-win_amd64.whl.metadata (5.7 kB)
Collecting requests (from torchvision)
  Obtaining dependency information for requests from https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl.metadata
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Obtaining dependency information for pillow!=8.3.*,>=5.3.0 from https://files.pythonhosted.org/packages/66/d4/054e491f0880bf0119ee79cdc03264e01d5732e06c454da8c69b83a7c8f2/Pillow-10.0.0-cp311-cp311-win_amd64.whl.metadata
  Using cached Pillow-10.0.0-cp311-cp311-win_amd64.whl.metadata (9.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/be/bb/08b85bc194034efbf572e70c3951549c8eca0ada25363afc154386b5390a/MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata
  Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata (3.1 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Obtaining dependency information for charset-normalizer<4,>=2 from https://files.pythonhosted.org/packages/91/6e/db0e545302bf93b6dbbdc496dd192c7f8e8c3bb1584acba069256d8b51d4/charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl.metadata
  Using cached charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl.metadata (31 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Using cached https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Obtaining dependency information for urllib3<3,>=1.21.1 from https://files.pythonhosted.org/packages/9b/81/62fd61001fa4b9d0df6e31d47ff49cfa9de4af03adecf339c7bc30656b37/urllib3-2.0.4-py3-none-any.whl.metadata
  Using cached urllib3-2.0.4-py3-none-any.whl.metadata (6.6 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Obtaining dependency information for certifi>=2017.4.17 from https://files.pythonhosted.org/packages/4c/dd/2234eab22353ffc7d94e8d13177aaa050113286e93e7b40eae01fbf7c3d9/certifi-2023.7.22-py3-none-any.whl.metadata
  Using cached certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch)
  Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached Pillow-10.0.0-cp311-cp311-win_amd64.whl (2.5 MB)
Using cached filelock-3.12.3-py3-none-any.whl (11 kB)
Using cached numpy-1.25.2-cp311-cp311-win_amd64.whl (15.5 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Using cached charset_normalizer-3.2.0-cp311-cp311-win_amd64.whl (96 kB)
Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl (17 kB)
Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.2.0 filelock-3.12.3 idna-3.4 jinja2-3.1.2 mpmath-1.3.0 networkx-3.1 numpy-1.25.2 pillow-10.0.0 requests-2.31.0 sympy-1.12 torch-2.0.1+cu117 torchaudio-2.0.2+cu117 torchvision-0.15.2+cu117 typing-extensions-4.7.1 urllib3-2.0.4
text-generation-webui Instructed installation completed
Collecting bitsandbytes==0.37.2
  Using cached https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.37.2-py3-none-any.whl (66.7 MB)
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.37.2
text-generation-webui Instructed installation completed
Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\download-model.py", line 20, in <module>
    import tqdm
ModuleNotFoundError: No module named 'tqdm'
text-generation-webui Instructed installation completed
Traceback (most recent call last):
  File "C:\Users\Daedar\seait\text-generation-webui\server.py", line 12, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
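The log above shows the requirements step failing on a cp310 auto_gptq wheel while the venv was created with Python 3.11, which would leave packages such as tqdm and gradio uninstalled. A possible manual workaround (a sketch, using the paths from the log above) is to finish the requirements install inside that venv:

  rem Re-run the requirements install inside the venv that SEAIT created
  cd /d C:\Users\Daedar\seait\text-generation-webui
  venv\Scripts\python.exe -m pip install -r requirements.txt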

Additional information

No response

[Bug]: AssertionError: Torch not compiled with CUDA enabled

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

Unable to run and install Kandinsky

Step-by-step instructions to reproduce the issue.

Simple Install button, unable to launch

Expected Behavior

Should install everything and launch correctly

Current Behavior

Not launching and giving the error as mentioned

Version or Commit where the problem happens

0.1.4.8

What platform do you use SEAIT on?

Windows

What Python version are you running?

3.11.8

What processor are you running SEAIT on?

No response

What GPU are you running SEAIT on?

RTX 3050

How much GPU VRAM are you running SEAIT with?

N.A.

On what project are you facing the issue?

Kandinsky

Console logs

launching Kandinsky-2 at C:\Users\Admin\seait\Kandinsky-2
C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\huggingface_hub\file_download.py:678: FutureWarning: 'cached_download' is the legacy way to download files from the HF hub, please consider upgrading to 'hf_hub_download'
  warnings.warn(
Traceback (most recent call last):
  File "C:\Users\Admin\seait\Kandinsky-2\app.py", line 13, in <module>
    model = get_kandinsky2(
            ^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\kandinsky2\__init__.py", line 180, in get_kandinsky2
    model = get_kandinsky2_1(
            ^^^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\kandinsky2\__init__.py", line 160, in get_kandinsky2_1
    model = Kandinsky2_1(config, cache_model_name, cache_prior_name, device, task_type=task_type)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\kandinsky2\kandinsky2_1_model.py", line 64, in __init__
    self.clip_model, self.preprocess = clip.load(
                                       ^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\clip\clip.py", line 139, in load
    model = build_model(state_dict or model.state_dict()).to(device)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\nn\modules\module.py", line 1152, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
    module._apply(fn)
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Admin\seait\Kandinsky-2\venv\Lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
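A possible manual workaround (a sketch, using the venv path from the log above) is to swap the CPU-only torch build in the Kandinsky-2 venv for a CUDA one, using the same cu117 index that appears elsewhere on this page:

  rem Replace the CPU-only torch in the Kandinsky-2 venv with a CUDA 11.7 build
  C:\Users\Admin\seait\Kandinsky-2\venv\Scripts\python.exe -m pip install --force-reinstall torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117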

Additional information

Looking for support - and hopefully an extension to inference with Kandinsky 2.2 :)

ERROR:root:Error getting NVIDIA GPU info: Not Supported

I gave access to the GPU performance counters to all users and tried running seait.exe with administrator rights, but the error still does not disappear. The program itself is in the root of the SD folder, but it thinks SD is not installed, and the AiPanic and Settings tabs are inactive. What could the problem be?
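One way to narrow this down (an illustrative check, not something SEAIT itself documents) is to ask the NVIDIA driver directly whether it can report GPU info; if nvidia-smi fails or prints "[Not Supported]" here too, the problem is at the driver level rather than in SEAIT:

  rem Query the driver directly; fields it cannot report are shown as [Not Supported]
  nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv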

Symlink Windows UAC protection

Hi
By default you cannot create symbolic links for models right away because of Windows security.
Running the program as an administrator does not solve the problem either.
I think you need to add this information either in the program or in the readme

You have to add the user right by following these steps (an example mklink command follows them):
Open gpedit.msc
Go to Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Create symbolic links
Type the user name and click "Check Names", then OK.
Reboot the computer.
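Once the "Create symbolic links" right (or Windows Developer Mode) is in place, a directory symlink can be created from a normal command prompt, for example (the paths below are placeholders):

  rem mklink /D <link> <target> creates a directory symbolic link (example paths only)
  mklink /D "C:\path\to\project\models\Stable-diffusion" "D:\models_treasury\Stable-diffusion"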

Model Treasury and location related question

So I thought I could add my models to the treasury and it would use symlinks for the other projects, but it turns out it moves all my files to the treasury and then symlinks the location, which is alright, but I noticed 2 things that don't make sense to me:

1- The project is installed to the F drive, but it still reverts to C:\Users for every single new project. Is this intended? Can I not make it default to F:\seait\projects?

2- I installed ComfyUI, but it turns out I have to manually create symlinks for every single models_treasury folder again. Is this intended? Can it not create the symlinks automatically after installing a project?

text generation webui error on launch

text-generation-webui Instructed installation completed
Gradio HTTP request redirected to localhost :)
Traceback (most recent call last):
  File "D:\seait_installers_version_0.1.2\text-generation-webui\server.py", line 44, in <module>
    from modules import chat, shared, training, ui
  File "D:\seait_installers_version_0.1.2\text-generation-webui\modules\training.py", line 13, in <module>
    from peft import (LoraConfig, get_peft_model, prepare_model_for_int8_training,
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\peft\mapping.py", line 16, in <module>
    from .peft_model import (
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\peft\peft_model.py", line 31, in <module>
    from .tuners import (
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\peft\tuners\__init__.py", line 21, in <module>
    from .lora import LoraConfig, LoraModel
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\peft\tuners\lora.py", line 40, in <module>
    import bitsandbytes as bnb
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\__init__.py", line 7, in <module>
    from .autograd.functions import (
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\autograd\__init__.py", line 1, in <module>
    from ._functions import undo_layout, get_inverse_transform_indices
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\autograd\_functions.py", line 9, in <module>
    import bitsandbytes.functional as F
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\functional.py", line 17, in <module>
    from .cextension import COMPILED_WITH_CUDA, lib
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cextension.py", line 13, in <module>
    setup.run_cuda_setup()
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 97, in run_cuda_setup
    binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 403, in evaluate_cuda_setup
    cudart_path = determine_cuda_runtime_lib_path()
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 264, in determine_cuda_runtime_lib_path
    lib_ld_cuda_libs = find_cuda_lib_in(candidate_env_vars["PATH"])
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 202, in find_cuda_lib_in
    return get_cuda_runtime_lib_paths(
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 186, in get_cuda_runtime_lib_paths
    return {
  File "D:\seait_installers_version_0.1.2\text-generation-webui\venv\lib\site-packages\bitsandbytes\cuda_setup\main.py", line 189, in <setcomp>
    if (path / CUDA_RUNTIME_LIB).is_file()
  File "C:\Users\angry\AppData\Local\Programs\Python\Python310\lib\pathlib.py", line 1322, in is_file
    return S_ISREG(self.stat().st_mode)
  File "C:\Users\angry\AppData\Local\Programs\Python\Python310\lib\pathlib.py", line 1097, in stat
    return self._accessor.stat(self, follow_symlinks=follow_symlinks)
OSError: [WinError 1920] The file cannot be accessed by the system: 'C:\Users\angry\AppData\Local\Microsoft\WindowsApps\python.exe\cudart64_110.dll'
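The last frames show bitsandbytes walking every PATH entry as a candidate directory and tripping over the WindowsApps python.exe execution alias. Two things that may help (hedged suggestions, not an official fix): disable the python.exe / python3.exe entries under Windows Settings > Apps > App execution aliases, and/or upgrade bitsandbytes inside the venv to the Windows wheel already referenced elsewhere on this page:

  rem Upgrade bitsandbytes inside the venv to the Windows build linked earlier on this page
  D:\seait_installers_version_0.1.2\text-generation-webui\venv\Scripts\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl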

Detect already installed projects

Hello!
First, I'd like to thank you for this great tool, it's a big time saver!

I wanted to suggest adding an option to set a path to already installed projects, or an option to scan a selected directory to detect them.
Also:
I think SEAIT has great potential to become a universal launcher for different AI projects, and since many of them share the same components (a Stable Diffusion model, for example), it would be great and helpful to be able to manage such shareable components directly from SEAIT.
Thank you for your work!
Keep it up!

[Feature Request]: Missing Arguments Lama Cleaner

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Summary

The Lama Cleaner is an excellent tool for post-editing images, especially when it comes to the training of LoRA Files. It aids in removing unnecessary elements and errors that could potentially compromise the final result.

However, there are certain arguments that are missing which could further enhance its effectiveness.

Description

The Lama Cleaner is available in two versions: Gradio GUI and Native GUI. The Native GUI also includes a file explorer that facilitates swift changes to the current image.

Enabling these arguments can enhance its functionality:
--input Path/to/InputFolder --output-dir Path/to/OutputFolder --gui

For convenience, you can configure the input and output to point to the same directory, which will result in overwriting the old files.
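For illustration, the request amounts to SEAIT launching Lama Cleaner with something like the following (the flags are the ones listed above; the paths are placeholders, and lama-cleaner is assumed to be on PATH in the project's venv):

  rem Launch Lama Cleaner's native GUI with fixed input/output folders (placeholder paths)
  lama-cleaner --model lama --input "C:\path\to\InputFolder" --output-dir "C:\path\to\OutputFolder" --gui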

Additional information

No response

[Bug] v0.1.1 "Project cannot contain spaces, project ID X not found"

Running the tool works fine, but when I click on any of the available tools I get a looping error, "Path cannot contain spaces, project ID X not found", that causes the GUI to reload and repeat; if I click a second project it will loop between the GUIs.

I ran seait from a path that does include a space. If I initially run it from a no-space path then it'll work (except for trying to set the path to one without spaces). I know that is a workaround, but I'd rather put this program in a place I want it rather than be confined to a no-space path.

[Bug]: When the prompt is too long, VisionCrafter can't generate an MP4 as usual

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

In VisionCrafter, when the prompt is too long, the generation process may proceed normally, but there might be an error reported by FFMPEG at the end. This may be due to the long file name, which can result in failed GIF or image exports. When the prompt keywords are reduced, the video generation proceeds smoothly. It would be helpful to have a feature that allows custom video/project names to be used.

Step-by-step instructions to reproduce the issue.

The Bug Example:
Prompt:
Broca_b,male focus,animal ears,1boy,black hair,solo,muscular,short hair,yellow eyes,hood,tail,abs,cat ears,pectorals,muscular male,cat tail,nipples,clothes lift,pants,shirt lift,cat boy,lifted by self,official alternate costume,toned,toned male,clothes in mouth,mouth hold,shirt in mouth,censored,pubic hair,bar censor,male pubic hair,best quality,masterpiece,highres,realistic,male focus,looking at viewer
Negative:
realisticvision-negative-embedding,bad-picture-chill-75v,badv4,easynegative,ng_deepnegative_v1_75t,Unspeakable-Horrors-Composition-4v,verybadimagenegative_v1.2-6400,verybadimagenegative_v1.3,FastNegativeV2,CyberRealistic_Negative-neg,By bad artist -neg,bad face,bad anatomy,bad proportions,bad perspective,multiple views,concept art,reference sheet,mutated hands and fingers,interlocked fingers,twisted fingers,excessively bent fingers,more than five fingers,lowres,bad hands,text,error,missing fingers,extra digit,fewer digits,cropped,worst quality,low quality,normal quality,jpeg artifacts,signature,watermark,username,blurry,artist name,low quality lowres multiple breasts,low quality lowres mutated hands and fingers,more than two arms,more than two hands,more than two legs,more than two feet,low quality lowres long body,low quality lowres mutation poorly drawn,low quality lowres black-white,low quality lowres bad anatomy,low quality lowres liquid body,low quality lowres liquid tongue,low quality lowres disfigured,low quality lowres malformed,low quality lowres mutated,low quality lowres anatomical nonsense,low quality lowres text font ui,low quality lowres error,low quality lowres malformed hands,low quality lowres long neck,low quality lowres blurred,low quality lowres lowers,low quality lowres low res,low quality lowres bad proportions,low quality lowres bad shadow,low quality lowres uncoordinated body,low quality lowres unnatural body,low quality lowres fused breasts,low quality lowres bad breasts,low quality lowres huge breasts,low quality lowres poorly drawn breasts,low quality lowres extra breasts,low quality lowres liquid breasts,low quality lowres heavy breasts,low quality lowres missing breasts,low quality lowres huge haunch,low quality lowres huge thighs,low quality lowres huge calf,low quality lowres bad hands,low quality lowres fused hand,low quality lowres missing hand,low quality lowres disappearing arms,low quality lowres disappearing thigh,low quality lowres disappearing calf,low quality lowres disappearing legs,low quality lowres fused ears,low quality lowres bad ears,low quality lowres poorly drawn ears,low quality lowres extra ears,low quality lowres liquid ears,low quality lowres heavy ears,low quality lowres missing ears,low quality lowres fused animal ears,low quality lowres bad animal ears,low quality lowres poorly drawn animal ears,low quality lowres extra animal ears,low quality lowres liquid animal ears,low quality lowres heavy animal ears,low quality lowres missing animal ears,low quality lowres text,low quality lowres ui,low quality lowres missing fingers,low quality lowres missing limb,low quality lowres fused fingers,low quality lowres one hand with more than 5 fingers,low quality lowres one hand with less than 5 fingers,low quality lowres one hand with more than 5 digit,low quality lowres one hand with less than 5 digit,low quality lowres extra digit,low quality lowres fewer digits,low quality lowres fused digit,low quality lowres missing digit,low quality lowres bad digit,low quality lowres liquid digit,low quality lowres colorful tongue,low quality lowres black tongue,low quality lowres cropped,low quality lowres watermark,low quality lowres username,low quality lowres blurry,low quality lowres JPEG artifacts,low quality lowres signature,low quality lowres 3D,low quality lowres 3D game,low quality lowres 3D game scene,low quality lowres 3D character,low quality lowres malformed feet,low quality lowres extra feet,low quality lowres bad feet,low quality lowres 
poorly drawn feet,low quality lowres fused feet,low quality lowres missing feet,low quality lowres extra shoes,low quality lowres bad shoes,low quality lowres fused shoes,low quality lowres more than two shoes,low quality lowres poorly drawn shoes,low quality lowres bad gloves,low quality lowres poorly drawn gloves,low quality lowres fused gloves,low quality lowres bad cum,low quality lowres poorly drawn cum,low quality lowres fused cum,low quality lowres bad hairs,low quality lowres poorly drawn hairs,low quality lowres fused hairs,low quality lowres big muscles,low quality lowres ugly,low quality lowres bad face,low quality lowres fused face,low quality lowres poorly drawn face,low quality lowres cloned face,low quality lowres big face,low quality lowres long face,low quality lowres bad eyes,low quality lowres fused eyes poorly drawn eyes,low quality lowres extra eyes,low quality lowres malformed limbs,low quality lowres more than 2 nipples,low quality lowres missing nipples,low quality lowres different nipples,low quality lowres fused nipples,low quality lowres bad nipples,low quality lowres poorly drawn nipples,low quality lowres black nipples,low quality lowres colorful nipples,low quality lowres gross proportions,short arm,low quality lowres missing arms,low quality lowres missing thighs,low quality lowres missing calf,low quality lowres missing legs,low quality lowres mutation,low quality lowres duplicate,low quality lowres morbid,low quality lowres mutilated,low quality lowres poorly drawn hands,low quality lowres more than 1 left hand,low quality lowres more than 1 right hand,low quality lowres deformed,low quality lowres extra arms,low quality lowres extra thighs,low quality lowres more than 2 thighs,low quality lowres extra calf,low quality lowres fused calf,low quality lowres extra legs,low quality lowres bad knee,low quality lowres extra knee,low quality lowres more than 2 legs,low quality lowres bad tails,low quality lowres bad mouth,low quality lowres fused mouth,low quality lowres poorly drawn mouth,low quality lowres bad tongue,low quality lowres tongue within mouth,low quality lowres too long tongue,low quality lowres big mouth,low quality lowres cracked mouth,low quality lowres dirty face,low quality lowres dirty teeth,low quality lowres dirty pantie,low quality lowres fused pantie,low quality lowres poorly drawn pantie,low quality lowres fused cloth,low quality lowres poorly drawn cloth,low quality lowres bad pantie,low quality lowres yellow teeth,low quality lowres thick lips,low quality lowres bad asshole,low quality lowres poorly drawn asshole,low quality lowres fused asshole,low quality lowres missing asshole,low quality lowres bad anus,low quality lowres bad pussy,low quality lowres bad crotch,low quality lowres bad crotch seam,low quality lowres fused anus,low quality lowres fused pussy,low quality lowres fused crotch,low quality lowres poorly drawn crotch,low quality lowres fused seam,low quality lowres poorly drawn anus,low quality lowres poorly drawn pussy,low quality lowres poorly drawn crotch seam,low quality lowres bad thigh gap,low quality lowres missing thigh gap,low quality lowres fused thigh gap,low quality lowres liquid thigh gap,low quality lowres poorly drawn thigh gap,low quality lowres bad collarbone,low quality lowres fused collarbone,low quality lowres missing collarbone,low quality lowres liquid collarbone,low quality lowres strong girl,low quality lowres obesity,low quality lowres worst quality,low quality lowres low quality,low quality lowres 
normal quality,low quality lowres liquid tentacles,low quality lowres bad tentacles,low quality lowres poorly drawn tentacles,low quality lowres split tentacles,low quality lowres fused tentacles,low quality lowres missing clit,low quality lowres bad clit,low quality lowres fused clit,low quality lowres colorful clit,low quality lowres black clit,low quality lowres liquid clit,low quality lowres QR code,low quality lowres bar code,low quality lowres censored,low quality lowres safety panties,low quality lowres safety knickers,low quality lowres beard,low quality lowres furry,pony,low quality lowres pubic hair,low quality lowres mosaic,low quality lowres excrement,low quality lowres shit,low quality lowres futa,low quality lowres testis,low quality lowres lowres,low quality lowres terrible,low quality lowres dirty,low quality lowres feces,low quality lowres organs,low quality lowres fat,low quality lowres thick thighs,low quality lowres low resolution rough,low quality lowres pedophile,low quality lowres bestiality,low quality lowres parody,low quality lowres traditional media,low quality lowres koma,low quality lowres comic,low quality lowres scary,low quality lowres severe,low quality lowres insects,low quality lowres gross scars,low quality lowres twisted human body,low quality lowres irrational human body,low quality lowres sharp fingers,low quality lowres parts of the body out of common sense,low quality lowres murder,low quality lowres beheading,low quality lowres zombie,low quality lowres mummy,low quality lowres graffiti,low quality lowres unfinished picture,low quality lowres terrible quality,low quality lowres Coprophilia,low quality lowres muscular,low quality lowres bald,low quality lowres monk,low quality lowres wrinkly,low quality lowres simple background,low quality lowres realistic,low quality lowres old,low quality lowres scan,low quality lowres touhou,low quality lowres yaoi,low quality lowres gay,low quality lowres femboy,low quality lowres trap,low quality lowres pee,low quality lowres doujinshi,low quality lowres monochrome,low quality lowres meme,low quality lowres demon,low quality lowres monstrous creature,low quality lowres tentacle,low quality lowres self harm,low quality lowres vomit,low quality lowres suicide,low quality lowres death,low quality lowres corpse,low quality lowres bone,low quality lowres skeleton,low quality lowres fingers over 6,low quality lowres framed,low quality lowres historical picture,low quality lowres futanari,low quality lowres shemale,low quality lowres transgender,low quality lowres dick girl,low quality lowres flat breasts,low quality lowres degenerate ass,low quality lowres retro artstyle,low quality lowres anime screencap,low quality lowres stitched,low quality lowres pokemon,low quality lowres ryona,low quality lowres animal,low quality lowres male focus,low quality lowres nipple penetration,low quality lowres sonic (series),low quality lowres bondage,low quality lowres bdsm,low quality lowres 2D,low quality lowres 2D game,low quality lowres 2D game scene,low quality lowres 2D character,low quality lowres game cg,low quality lowres watercolor (medium),low quality lowres 2koma,low quality lowres interlocked fingers,low quality lowres gloves,low quality lowres nitroplus,low quality lowres grayscale,low quality lowres sketch,low quality lowres line drawing,low quality lowres gorilla,low quality lowres meat,low quality lowres gundam,low quality lowres multiple views,low quality lowres cut,low quality lowres concept art,low quality 
lowres reference sheet,low quality lowres turnaround,low quality lowres chart,low quality lowres comparison,low quality lowres artist progress,low quality lowres lineup,low quality lowres before and after,low quality lowres orc,low quality lowres tusks,low quality lowres goblin,low quality lowres kobold,low quality lowres pony,low quality lowres,low quality lowres Humpbacked,low quality lowres text error,low quality lowres extra digits,low quality lowres standard quality,low quality lowres large breasts,low quality lowres shadow,low quality lowres nude,low quality lowres artist name,low quality lowres skeleton girl,low quality lowres bad legs,low quality lowres missing fingers,low quality lowres extra digit,low quality lowres artifacts,low quality lowres bad body,low quality lowres optical illusion,low quality lowres Glasses,low quality lowres girl,low quality lowres women,low quality lowres more than 1 moon,low quality lowres Multi foot,low quality lowres Multifold,low quality lowres Multi fingering,low quality lowres colored sclera,low quality lowres monster girl,low quality lowres Black hands,low quality lowres The background is incoherent,low quality lowres abnormal eye proportion,low quality lowres Abnormal hands,low quality lowres abnormal legs,low quality lowres abnormal feet abnormal fingers,low quality lowres sharp face,low quality lowres tranny,low quality lowres mutated hands,low quality lowres extra limbs,low quality lowres too many fingers,low quality lowres unclear eyes,low quality lowres bad,low quality lowres mutated hand and finger,low quality lowres malformed mutated,low quality lowres broken limb,low quality lowres incorrect limb,low quality lowres fusion finger,low quality lowres lose finger,low quality lowres multiple finger,low quality lowres multiple digit,low quality lowres fusion hand,low quality lowres lose leg,low quality lowres fused leg,low quality lowres multiple leg,low quality lowres bad cameltoe,low quality lowres colorful cameltoe,low quality lowres low polygon 3D game,(over three finger(fingers excluding thumb):2),(fused anatomy),(bad anatomy(body)),(bad anatomy(hand)),(bad anatomy(finger)),(over four fingers(finger):2),(bad anatomy(arms)),(over two arms(body)),(bad anatomy(leg)),(over two legs(body)),(interrupted(body, arm, leg, finger, toe)),(bad anatomy(arm)),(bad detail(finger):1.2),(bad anatomy(fingers):1.2),(multiple(fingers):1.2),(bad anatomy(finger):1.2),(bad anatomy(fingers):1.2),(fused(fingers):1.2),(over four fingers(finger):2),(multiple(hands)),(multiple(arms)),(multiple(legs)),(over three toes(toes excluding big toe):2),(bad anatomy(foot)),(bad anatomy(toe)),(over four toes(toe):2),(bad detail(toe):1.2),(bad anatomy(toes):1.2),(multiple(toes):1.2),(bad anatomy(toe):1.2),(bad anatomy(toes):1.2),(fused(toes):1.2),(over four toes(toe):2),(multiple(feet)),shoes,female,mature female,andromorph,intersex,gynomorph,full-package futanari,futa with male,futanari,vaginal,clitoris,(pussy:2),female pubic hair,muscular female,female ejaculation,female orgasm,mature female,self fondle,male with breasts,breasts,woman,(female focus),pokemon

Expected Behavior

Bug Report:
C:\Users\conan\seait\VisionCrafter\outputs\result-2023-09-07T16-46-25\results\mp4\0-Broca_b,male-focus,animal-ears,1boy,black-hair,solo,muscular,short-hair,yellow-eyes,hood,tail,abs,cat-ears,pectorals,muscular-male,cat-tail,nipples,clothes-lift,pants,shirt.mp4: No such file or directory
Exception in thread Thread-8 (animate_main_t):
Traceback (most recent call last):
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 479, in write_frames
    p.stdin.write(bb)
OSError: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\conan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\conan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\conan\seait\VisionCrafter\main.py", line 295, in animate_main_t
    animate_main(args,window)
  File "C:\Users\conan\seait\VisionCrafter\repos\animatediff\scripts\animate.py", line 211, in main
    save_videos_grid(sample, f"{savedir}/results/mp4/{sample_idx}-{prompt}.mp4")
  File "C:\Users\conan\seait\VisionCrafter\repos\animatediff\animatediff\utils\util.py", line 32, in save_videos_grid
    imageio.mimsave(path, outputs, fps=fps, codec='h264', quality=10, pixelformat='yuv420p')
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\core\functions.py", line 418, in mimwrite
    writer.append_data(im)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\core\format.py", line 502, in append_data
    return self._append_data(im, total_meta)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\plugins\ffmpeg.py", line 574, in _append_data
    self._write_gen.send(im)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 486, in write_frames
    raise IOError(msg)
OSError: [Errno 22] Invalid argument

FFMPEG COMMAND:
C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\binaries\ffmpeg-win64-v4.2.2.exe -y -f rawvideo -vcodec rawvideo -s 512x512 -pix_fmt rgb24 -r 8.00 -i - -an -vcodec h264 -pix_fmt yuv420p -qscale:v 1 -v warning C:\Users\conan\seait\VisionCrafter\outputs\result-2023-09-07T16-46-25\results\mp4\0-Broca_b,male-focus,animal-ears,1boy,black-hair,solo,muscular,short-hair,yellow-eyes,hood,tail,abs,cat-ears,pectorals,muscular-male,cat-tail,nipples,clothes-lift,pants,shirt.mp4

FFMPEG STDERR OUTPUT:

Current Behavior

The example without any issues:
Prompt:
Broca_b,male focus,animal ears,1boy,black hair,solo,muscular,short hair,yellow eyes,hood

Negative prompt not changed

Version or Commit where the problem happens

SEAIT 0.1.4.7 with VisionCrafter 0.0.6

What platform do you use SEAIT on?

Windows

What Python version are you running?

3.10.9

What processor are you running SEAIT on?

GPU

What GPU are you running SEAIT on?

RTX 3060Ti

How much GPU VRAM are you running SEAIT with?

8GB

On what project are you facing the issue?

SEAIT

Console logs

launching VisionCrafter at C:\Users\conan\seait\VisionCrafter
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 
'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 
'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.mlp.fc2.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'text_projection.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm2.weight', 
'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.mlp.fc2.bias', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.23.mlp.fc2.weight', 
'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'visual_projection.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 
'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'logit_scale', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [09:46<00:00, 29.31s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:37<00:00,  2.36s/it]
[libx264 @ 0000019a372d0800] -qscale is ignored, -crf is recommended.
[libx264 @ 000002be448005c0] -qscale is ignored, -crf is recommended.
Token indices sequence length is longer than the specified maximum sequence length for this model (115 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: [', mouth hold, shirt in mouth, censored, pubic hair, bar censor, male pubic hair, best quality, masterpiece, highres, realistic, male focus, looking at viewer']
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [08:20<00:00, 25.05s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:36<00:00,  2.26s/it]
C:\Users\conan\seait\VisionCrafter\outputs\result-2023-09-07T16-36-16\results\mp4\0-Broca_b,male-focus,animal-ears,1boy,black-hair,solo,muscular,short-hair,yellow-eyes,hood,tail,abs,cat-ears,pectorals,muscular-male,cat-tail,nipples,clothes-lift,pants,shirt.mp4: No such file or directory
Exception in thread Thread-5 (animate_main_t):
Traceback (most recent call last):
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 479, in write_frames
    p.stdin.write(bb)
OSError: [Errno 22] Invalid argument

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\conan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\conan\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\conan\seait\VisionCrafter\main.py", line 295, in animate_main_t
    animate_main(args,window)
  File "C:\Users\conan\seait\VisionCrafter\repos\animatediff\scripts\animate.py", line 211, in main
    save_videos_grid(sample, f"{savedir}/results/mp4/{sample_idx}-{prompt}.mp4")
  File "C:\Users\conan\seait\VisionCrafter\repos\animatediff\animatediff\utils\util.py", line 32, in save_videos_grid
    imageio.mimsave(path, outputs, fps=fps, codec='h264', quality=10, pixelformat='yuv420p')
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\core\functions.py", line 418, in mimwrite
    writer.append_data(im)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\core\format.py", line 502, in append_data
    return self._append_data(im, total_meta)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio\plugins\ffmpeg.py", line 574, in _append_data
    self._write_gen.send(im)
  File "C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\_io.py", line 486, in write_frames
    raise IOError(msg)
OSError: [Errno 22] Invalid argument

FFMPEG COMMAND:
C:\Users\conan\seait\VisionCrafter\venv\lib\site-packages\imageio_ffmpeg\binaries\ffmpeg-win64-v4.2.2.exe -y -f rawvideo -vcodec rawvideo -s 512x512 -pix_fmt rgb24 -r 8.00 -i - -an -vcodec h264 -pix_fmt yuv420p -qscale:v 1 -v warning C:\Users\conan\seait\VisionCrafter\outputs\result-2023-09-07T16-36-16\results\mp4\0-Broca_b,male-focus,animal-ears,1boy,black-hair,solo,muscular,short-hair,yellow-eyes,hood,tail,abs,cat-ears,pectorals,muscular-male,cat-tail,nipples,clothes-lift,pants,shirt.mp4

FFMPEG STDERR OUTPUT:

Additional information

No response
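
A plausible cause of the two errors above (ffmpeg's "No such file or directory" followed by OSError: [Errno 22] on its stdin) is that the prompt-derived file name pushes the full output path past Windows' 260-character MAX_PATH limit, so ffmpeg never manages to create the file. The helper below is only an illustrative sketch of the usual workaround (shorten and sanitize the name before the video is written); safe_video_name and the 80-character cap are made up for this example and are not part of VisionCrafter or AnimateDiff:

import re
from pathlib import Path

MAX_NAME_LEN = 80  # assumed cap; keeps the full path well under the 260-character limit

def safe_video_name(sample_idx: int, prompt: str) -> str:
    """Turn a long prompt into a short, filesystem-safe .mp4 file name."""
    name = re.sub(r"[^A-Za-z0-9_-]+", "-", prompt).strip("-")  # replace commas, spaces, etc.
    return f"{sample_idx}-{name[:MAX_NAME_LEN]}.mp4"

out_dir = Path("outputs") / "results" / "mp4"
out_dir.mkdir(parents=True, exist_ok=True)  # make sure the target folder actually exists
print(out_dir / safe_video_name(0, "Broca_b, male focus, animal ears, 1boy, black hair, solo"))

In save_videos_grid, a name built this way would replace the raw prompt before imageio.mimsave is called.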

[Bug]: WhisperUI does not start.

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

When installing WhisperUI and trying to start it, an error occurs and it does not start.

C:\Users\Myname\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\Users\Myname\seait\whisper-ui\streamlit': [Errno 2] No such file or directory

Step-by-step instructions to reproduce the issue.

Just install WhisperUI.

Expected Behavior

Starting WhisperUI.

Current Behavior

C:\Users\Tux55\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\Users\Tux55\seait\whisper-ui\streamlit': [Errno 2] No such file or directory

(venv) C:\Users\Tux55\seait\whisper-ui>

Version or Commit where the problem happens

seait 0.1.4.8

What platforms do you use SEAIT on?

Windows

What Python version are you running on?

python 3.11.5

What processor are you running SEAIT on?

GPU

What GPU are you running SEAIT on?

GTX1660

How much GPU VRAM are you running SEAIT on?

No response

On what project are you facing the issue?

WhisperUI

Console logs

C:\Users\Tux55\AppData\Local\Programs\Python\Python311\python.exe: can't open file 'C:\\Users\\Tux55\\seait\\whisper-ui\\streamlit': [Errno 2] No such file or directory

(venv) C:\Users\Tux55\seait\whisper-ui>

Additional information

No response
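
The error suggests the launcher passes streamlit to python as if it were a script file, so Python looks for a file literally named streamlit. A hedged manual workaround, assuming the project's Streamlit entry point is a file such as app.py (the actual file name should be checked in the whisper-ui folder), is to run Streamlit as a module from the project's venv:

cd /d C:\Users\Myname\seait\whisper-ui
venv\Scripts\activate
python -m streamlit run app.py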

Can't get it to run

It creates preference files and then the command-line window crashes. Same for 1.3.
I have A1111 installed; could that be the issue?

xformers on Vladmandic ?

At launch, there is a message: No module 'xformers'. Proceeding without it.
Do I need it with Vladmandic? How do I install it?
Thanks for your work!

Arguments extra options

Could you add more Arguments for vladmandic?

--api

Maybe just like the other UI-UX A1111 fork you added from anapnoe.

The other thing: if you could also add an option to put other things into webui-user.bat, that would be great.
example:
--opt-sdp-attention

Some other extensions also need a model location to be set, just like the SadTalker extension.

example:
set SADTALKER_CHECKPOINTS=E:\stable-diffusion-webui\extensions\SadTalker\checkpoints

And some people like to stay up to date so we can add:
git pull

but each of those goes on its own line.

Hope what I say makes some sense ;-)
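
Putting the examples from this request together, a webui-user.bat along these lines would cover it. This is only an illustrative sketch for the A1111-style launcher, not something SEAIT generates; the SadTalker path is the one quoted above and has to match your own install:

@echo off
git pull
set SADTALKER_CHECKPOINTS=E:\stable-diffusion-webui\extensions\SadTalker\checkpoints
set COMMANDLINE_ARGS=--api --opt-sdp-attention
call webui.bat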

[Bug] Window cannot be scaled

The display resolution is low, and some buttons cannot be displayed. There is also no scrollbar in the lower section.

[Issue] InvokeAI won't launch

Just installed InvokeAI and went through all the steps in the terminal to configure it. When I go to launch it, nothing happens; it tries to launch but encounters an error. I am unable to locate invoke.bat to try to launch it manually.

Terminal output after attempting to upgrade and launch:

Updating InvokeAI at D:\ai\InvokeAI
Already up to date.
launching InvokeAI at D:\ai\InvokeAI
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
[2023-07-22 14:31:13,753]::[InvokeAI]::INFO --> Patchmatch initialized
D:\ai\InvokeAI\venv\lib\site-packages\torchvision\transforms\functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
[2023-07-22 14:31:13,988]::[InvokeAI]::INFO --> InvokeAI version 3.0.0
[2023-07-22 14:31:14,010]::[InvokeAI]::INFO --> GPU device = cuda NVIDIA GeForce RTX 3080 Ti
[2023-07-22 14:31:14,024]::[InvokeAI]::INFO --> Scanning C:\Users\USERNAME\invokeai\models for new models
[2023-07-22 14:31:14,405]::[InvokeAI]::INFO --> Scanned 0 files and directories, imported 0 models
[2023-07-22 14:31:14,417]::[InvokeAI]::INFO --> Model manager service initialized
[2023-07-22 14:31:14,422]::[InvokeAI]::INFO --> InvokeAI database location is "C:\Users\USERNAME\invokeai\databases\invokeai.db"
invoke>
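
A hedged reading of the log above: InvokeAI 3.0 did start, but in its command-line client (the invoke> prompt on the last line), so no browser UI appears. Assuming the SEAIT-created venv at D:\ai\InvokeAI\venv and that InvokeAI 3.x put its usual entry points there (worth verifying in venv\Scripts), the web UI could be started manually like this:

cd /d D:\ai\InvokeAI
venv\Scripts\activate
invokeai-web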

bark no audio backend

Trying to create a cloned voice, but I get this error; it may be related to either soundfile or PySoundFile, or more likely a missing torchaudio installation/backend in the env.

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\blocks.py", line 1022, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\helpers.py", line 588, in tracked_fn
    response = fn(*args)
               ^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\bark\clonevoice.py", line 20, in clone_voice
    wav, sr = torchaudio.load(audio_filepath)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\torchaudio\backend\no_backend.py", line 16, in load
    raise RuntimeError("No audio I/O backend is available.")
RuntimeError: No audio I/O backend is available.
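
The last frame points at torchaudio rather than Bark itself: torchaudio has no audio I/O backend in this venv, and on Windows the soundfile package usually provides one. A minimal check, assuming that diagnosis (first install it with venv\Scripts\pip install soundfile), could look like this:

# Run inside the bark-gui venv after installing soundfile.
import torchaudio

print(torchaudio.list_audio_backends())        # should now list 'soundfile'
wav, sr = torchaudio.load("voice_sample.wav")  # "voice_sample.wav" is a placeholder file name
print(wav.shape, sr)                           # the load call above is the one that raised the error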

How to Stop an AI

Just curious.
Since all AIs are loaded in the same CMD window, how can I stop an AI separately if I don't need it anymore?
I don't see any "Stop" button ^^

xformers for auto1111

I have an NVIDIA GeForce GTX 980 Ti, a 6 GB VRAM GPU. In InvokeAI I can render up to 1280x1024 without upscaling, and sometimes even larger dimensions, because InvokeAI enables xformers by default, which allows it to make better use of the available VRAM.

I have tried so many times to install xformers for auto1111 with no luck!
Maybe you could create an auto1111 fork or a one-click installation with xformers, especially for some of the older GPU cards...

Thanks for your time...
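
For a stock auto1111 install, xformers is normally enabled through the --xformers command-line flag rather than a manual pip install; with that flag the launcher should install a matching xformers build on first start (whether a prebuilt build supports a GTX 900-series card is not guaranteed). A minimal webui-user.bat sketch, with --medvram added only as an optional example for low-VRAM cards:

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --medvram
call webui.bat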

[Feature Request] Server Edition

Good day,

I was thinking: wouldn't it be possible to make a server application with this? Like a central AI core with a web interface from which someone can start or stop the desired AIs.

Currently it is installed in my datacenter and I enable/disable it via RDC (GPU cluster machine).
A purely web-based administration would be really awesome.

[Bug]: InvokeAI Installation or Launch failed

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

Bug Description

launching InvokeAI at C:\Users\H1ghSyst3m\seait\InvokeAI
C:\Users\H1ghSyst3m\seait\InvokeAI\venv\Scripts\python.exe: can't find '__main__' module in 'C:\\Users\\H1ghSyst3m\\seait\\InvokeAI\\invokeai'

(venv) C:\Users\H1ghSyst3m\seait\InvokeAI>

Step-by-step instructions to reproduce the issue.

Install InvokeAI and launch

Expected Behavior

It launches and works

Current Behavior

After I start the installation, the error

launching InvokeAI at C:\Users\H1ghSyst3m\seait\InvokeAI
C:\Users\H1ghSyst3m\seait\InvokeAI\venv\Scripts\python.exe: can't find '__main__' module in 'C:\\Users\\H1ghSyst3m\\seait\\InvokeAI\\invokeai'

(venv) C:\Users\H1ghSyst3m\seait\InvokeAI>

appeared, but in SEAIT it shows that it is installed and ready to launch. On launch, the same error appears.

Version or Commit where the problem happens

0.1.4.7

What platforms do you use SEAIT on?

Windows

What Python version are you running on?

3.10.6

What processor are you running SEAIT on?

GPU, CPU

What GPU are you running SEAIT on?

3090 Ti

How much GPU VRAM are you running SEAIT on?

No response

On what project are you facing the issue?

InvokeAI

Console logs

launching InvokeAI at C:\Users\H1ghSyst3m\seait\InvokeAI
C:\Users\H1ghSyst3m\seait\InvokeAI\venv\Scripts\python.exe: can't find '__main__' module in 'C:\\Users\\H1ghSyst3m\\seait\\InvokeAI\\invokeai'

(venv) C:\Users\H1ghSyst3m\seait\InvokeAI>

Additional information

No response

Install buttons disabled

All the install buttons are disabled except the Python install button. Python runs through the installation process, but then SEAIT still doesn't find Python. Tried running in both user and administrator mode. Windows 10 x64.
Actually, I have a working automatic1111 installation, so I already have Python and git.

Bark-GUI - can't generate audio with custom voice

I used Bark-GUI to clone a prompt from an audio sample; that worked great. When I try to create speech from text using the custom voice, I get the following error. I am able to create arbitrary audio from text using the pre-built prompts. This error only happens when I have "use coarse history" checked. Possibly this is a bark-gui problem.

Generating Text (1/1) -> custom\MeMyselfAndI:Hello Sir, How can I help you today?
Traceback (most recent call last):
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\blocks.py", line 1022, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\venv\Lib\site-packages\gradio\helpers.py", line 588, in tracked_fn
    response = fn(*args)
               ^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\webui.py", line 114, in generate_text_to_speech
    audio_array = generate_audio(text, selected_speaker, text_temp, waveform_temp)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\bark\api.py", line 113, in generate_audio
    out = semantic_to_waveform(
          ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\bark\api.py", line 54, in semantic_to_waveform
    coarse_tokens = generate_coarse(
                    ^^^^^^^^^^^^^^^^
  File "C:\Users\toor\Desktop\Ai\seait_installers_version_0.1.4\bark-gui\bark\generation.py", line 592, in generate_coarse
    round(x_coarse_history.shape[-1] / len(x_semantic_history), 1)
AssertionError
