Fooocus's Introduction

Fooocus's People

Contributors

camenduru, cocktailpeanut, crohrer, daswer123, dooglewoogle, eddyizm, hangover3832, hisk2323, hswlab, hydra213, iomisaka, josephrocca, lllyasviel, mashb1t, mindofmatter, moonride303, nbs, oivasenk, rayleichenxi, rsl8, shaye059, shinshin86, tcmaps, ttio2tech, v1sionverse, wari-dudafa, whitehara, xhoxye, zaldos, zxilly


Fooocus's Issues

Unclear Install Instructions

Hey, this project looks really cool and I want to try it out.

You pride yourselves on being easy to install and get going:

Fooocus has simplified the installation. Between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to less than 3. Minimal GPU memory requirement is 4GB (Nvidia).

But I'm unable to progress. I uncompressed the file and cannot find any run.bat file as mentioned. There are, however, a run_cpu.bat and one for NVIDIA GPUs, but not in the root folder. Are those the ones I am supposed to run?

Even when I run the GPU one (since I have AMD), it just pauses and then exits, saying that main.py is not in the folder.

Would you mind making the run.bat step more clear so I can progress?

img2img

Very amazing creation! Would it be possible to add an img2img mode to this?

upload

I can't see an option to upload an image and then change it using prompts.

[Feature request] Propose a shared model location

There are so many SD webuis now, and each saves models inside its own directory structure. Novice users unaware of symlinking or launch parameters will resort to upgrading their drives, needlessly wasting storage.

Someone should take the lead and define a default location: AppData/Local on Windows, or the user's home directory on *nix. HF diffusers sets a positive example by using a system-wide cache.
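As a sketch of what such a convention could look like (the SD_MODELS_DIR variable name and sd-models folder are hypothetical, not an existing standard):

```python
import os
import sys
from pathlib import Path

def shared_models_dir() -> Path:
    """Resolve a shared, tool-agnostic model directory:
    an env-var override first, then a per-OS default."""
    override = os.environ.get("SD_MODELS_DIR")  # hypothetical convention
    if override:
        return Path(override)
    if sys.platform == "win32":
        base = os.environ.get("LOCALAPPDATA", str(Path.home() / "AppData" / "Local"))
        return Path(base) / "sd-models"
    # *nix: follow the XDG cache convention, like HF's ~/.cache/huggingface
    xdg = os.environ.get("XDG_CACHE_HOME", str(Path.home() / ".cache"))
    return Path(xdg) / "sd-models"
```

Each webui would then only need to check the same location before falling back to its private directory.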

problem with bowtie mapping

Dear SqueezeMeta developers,
I've been trying to analyse my data for several days now, but I always get an error at step 10. The first sample passes, but when the 2nd sample starts, I always get an error like: the program has finished abnormally.

[6 days, 16 hours, 3 minutes, 6 seconds]: STEP10 -> MAPPING READS: 10.mapsamples.pl
Reading samples from /pasteur/zeus/projets/p02/Biomics/Bioinfo/projet_Ariane/Result/data/00.Result.samples
Metagenomes found: 32
Mapping with Bowtie2 (Langmead and Salzberg 2012, Nat Methods 9(4), 357-9)
Creating reference from contigs
Working with sample 1: B1
Getting raw reads
Aligning to reference with bowtie
Calculating contig coverage
Reading contig length from /pasteur/zeus/projets/p02/Biomics/Bioinfo/projet_Ariane/Result/intermediate/01.Result.lon
Counting with sqm_counter: Opening 36 threads
1626281 reads counted
3252561 reads counted
[...]
40657001 reads counted
42283281 reads counted
Working with sample 2: B10
Getting raw reads
Aligning to reference with bowtie
Stopping in STEP10 -> 10.mapsamples.pl. Program finished abnormally

I was using the previous version before and it didn't work; a week ago I updated the pipeline, and I'm on the latest version (1.6.2), but I still have the same problem. Could you help me solve it? Thank you for your help.
please find attached the syslog.

syslog.zip

CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

OS: Windows Server 2016
Device: Tesla T4
CUDA Version: 12.2

File "C:\Fooocus_win64_1-1-10\Fooocus\modules\core.py", line 190, in ksampler_with_refiner
previewer = get_previewer(device, model.model.latent_format)
File "C:\Fooocus_win64_1-1-10\Fooocus\modules\core.py", line 79, in get_previewer
taesd = TAESD(None, taesd_decoder_path).to(device)
File "C:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
return self._apply(convert)
File "C:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
module._apply(fn)
File "C:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
param_applied = fn(param)
File "C:\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
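The debugging hint in the error message can be applied without touching the CUDA install: set the variable before PyTorch initializes CUDA, i.e. at the very top of the entry script. A sketch (not Fooocus's actual code):

```python
import os

# CUDA_LAUNCH_BLOCKING=1 makes kernel launches synchronous, so the
# Python stack trace points at the call that actually failed.
# It must be set before `import torch` runs, otherwise it has no effect.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# import torch  # only after the flag is set
# (TORCH_USE_CUDA_DSA, by contrast, requires a PyTorch build compiled
# with device-side assertions; it is not just an env var.)
```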

Thanks.

Issues with random seed values

  1. Large random seed values are not working: they appear to be artificially capped at 65535 (async_worker.py replaces any seed above this threshold with a random value from that tiny entropy range). We should be able to generate billions (the full int range) of different images for the same prompt if we want to, not just a few thousand. People may also have favorite numbers they like to use as seeds that are much bigger than a mere 2**16. Both A1111 and ComfyUI allow much larger values than 65535.
  2. With the current implementation, we get no information about which seed was used to generate an image when it was chosen randomly. It would be nice to display it somewhere in the UI (per image, or at least for the first image of the batch).
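A sketch of what uncapped seed handling could look like (resolve_seed and MAX_SEED are hypothetical names; the 2**32 - 1 ceiling mirrors what other UIs accept):

```python
import random

MAX_SEED = 2**32 - 1  # full unsigned 32-bit range instead of 65535

def resolve_seed(requested):
    """Return a usable seed: keep any in-range value, otherwise draw
    a random one from the full range, and report what was used."""
    if requested is None or not (0 <= requested <= MAX_SEED):
        used = random.randint(0, MAX_SEED)  # full entropy, not 0..65535
    else:
        used = requested
    return used  # caller should display this per image in the UI
```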

Missing run.bat file

There is currently no "run.bat" as referred to in the readme.md file.

Edit: this was after cloning the repo into a directory and downloading the repo zip.
I did not realize the download link in the readme was a different link. Why exactly is it set up like this?

How can I get fooocus to listen on my ip address instead of localhost?

It works on 127.0.0.1, but I want it to run on my network IP of 192.168.68.100 instead, so I can access it from other computers in the home rather than only at the console.

There is a share=True setting mentioned at startup, but no mention of which file to put it in. It doesn't work in the start.bat file, and I'm not sure this setting does what I want anyway. It also doesn't seem like I should be editing the source files when the launcher does a git pull at startup.

Any ideas?

thanks
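Fooocus is a Gradio app, and other reports in this thread show python launch.py --listen producing "Running on local URL: http://0.0.0.0:7860", which is exactly what LAN access needs; share=True instead creates a public gradio.live tunnel. A sketch of the mapping to Gradio's launch() parameters (launch_kwargs is a hypothetical helper; server_name and server_port are real Gradio arguments):

```python
def launch_kwargs(listen_all: bool = True, port: int = 7860) -> dict:
    """Arguments for gradio's launch(): binding to 0.0.0.0 exposes
    the UI on every interface, including a 192.168.x.x LAN address."""
    return {
        "server_name": "0.0.0.0" if listen_all else "127.0.0.1",
        "server_port": port,
        # share=True would create a public *.gradio.live tunnel instead,
        # which is not what LAN-only access needs.
    }

# Hypothetical usage inside the webui code:
# demo.launch(**launch_kwargs())
```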

Runtime flags? Is there a way to disable cuda_malloc or xformers?

I have tried to run this project, but it takes 15 minutes to load into the UI each time.

I'm not sure what is causing it to be so slow, but I do see that my GPU is never loaded.

I have an older Dell Precision with 32GB of RAM and a Quadro P6000 with 24GB of vRAM.

I see hundreds of messages in the terminal like this: "setting up MemoryEfficientCrossAttention. query dim is 640, context_dim is 2048 and using 10 heads.", followed by "adm".

I think it has to do with some settings in ComfyUI or possibly xformers, but I can't be sure until I'm able to set some runtime flags.

Is it possible to support --disable-cuda-malloc?

RuntimeError: CUDA error: operation not supported. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

could not connect to display (Ubuntu 22.04)

When generating a picture, I get the following error:

/root/anaconda3/envs/fooocus/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.get(instance, owner)()
0%| | 0/30 [00:00<?, ?it/s]/root/anaconda3/envs/fooocus/lib/python3.10/site-packages/torchsde/_brownian/brownian_interval.py:594: UserWarning: Should have tb<=t1 but got tb=14.614643096923828 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")
3%|████ | 1/30 [00:03<01:42, 3.55s/it]qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/root/anaconda3/envs/fooocus/lib/python3.10/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb.

Error

Can anyone explain this and suggest a solution?
I have 16GB RAM and a 4GB VRAM RTX 3050 Ti laptop GPU.
(error screenshot attached)

EDIT:
Problem solved! I'm not sure what happened.

Support for full list of resolutions used to train SDXL models

It should be possible to pick any of the resolutions used to train SDXL models, as described in Appendix I of the SDXL paper:

Height Width Aspect Ratio
512 2048 0.25
512 1984 0.26
512 1920 0.27
512 1856 0.28
576 1792 0.32
576 1728 0.33
576 1664 0.35
640 1600 0.4
640 1536 0.42
704 1472 0.48
704 1408 0.5
704 1344 0.52
768 1344 0.57
768 1280 0.6
832 1216 0.68
832 1152 0.72
896 1152 0.78
896 1088 0.82
960 1088 0.88
960 1024 0.94
1024 1024 1.0
1024 960 1.07
1088 960 1.13
1088 896 1.21
1152 896 1.29
1152 832 1.38
1216 832 1.46
1280 768 1.67
1344 768 1.75
1408 704 2.0
1472 704 2.09
1536 640 2.4
1600 640 2.5
1664 576 2.89
1728 576 3.0
1792 576 3.11
1856 512 3.62
1920 512 3.75
1984 512 3.88
2048 512 4.0
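Once the full bucket list is exposed, selecting one is straightforward; a sketch using a subset of the table above (note the paper's ratio column is height/width; nearest_bucket is a hypothetical helper):

```python
# (height, width) pairs from Appendix I of the SDXL paper (subset)
SDXL_BUCKETS = [
    (512, 2048), (576, 1728), (640, 1536), (704, 1408),
    (768, 1344), (832, 1216), (896, 1152), (1024, 1024),
    (1152, 896), (1216, 832), (1344, 768), (1536, 640), (2048, 512),
]

def nearest_bucket(aspect: float):
    """Pick the trained resolution whose height/width ratio is
    closest to the requested aspect ratio."""
    return min(SDXL_BUCKETS, key=lambda hw: abs(hw[0] / hw[1] - aspect))
```

A UI could list all forty rows directly, or let the user type any ratio and snap it to the nearest trained bucket as above.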

Getting stuck on first run of launch.py

I am running this on an AWS EC2 g4dn instance (with GPU enabled) and Ubuntu 22.04 LTS

When I do python launch.py --listen it gets stuck at:

model_type EPS
adm 2560
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...

I have done the 'simple' installation process from the readme and all seemed to go well.

Attaching the full output in a text file.
launch-py--listen.txt

I run InvokeAI on the same instance type/setup without issue.

API call is not supported in the installed CUDA driver

return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)

RuntimeError: CUDA error: API call is not supported in the installed CUDA driver
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

torch 2.0.1+cu117

Option --port doesn't seem to do anything

I tried python launch.py --listen --port 3123, and also without --listen.

Both open port 7860.

Python 3.10.12 (main, Jul  5 2023, 18:54:27) [GCC 11.2.0]
Fooocus version: 1.0.22
Inference Engine exists.
Inference Engine checkout finished.
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.

Also, --disable-auto-launch doesn't work; it still launches a (text-based) browser.
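A minimal sketch of how a launcher could honor these flags and pass them through to Gradio's launch() (the wiring is hypothetical; server_name, server_port, and inbrowser are real Gradio parameters):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--listen", action="store_true",
                    help="bind to 0.0.0.0 instead of 127.0.0.1")
parser.add_argument("--port", type=int, default=7860)
parser.add_argument("--disable-auto-launch", action="store_true",
                    help="do not open a browser tab on startup")

args = parser.parse_args([])  # replace [] with sys.argv[1:] in a real launcher

# Hypothetical hand-off to the Gradio app:
# demo.launch(server_name="0.0.0.0" if args.listen else "127.0.0.1",
#             server_port=args.port,
#             inbrowser=not args.disable_auto_launch)
```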

Feature Request: UI Improvements

The current UI is not intuitive in some sections, mainly the visual previews of the advanced settings.

  • No Preview of Aspect Ratios
  • No Preview of Styles
  • No Preview of LoRAs

Not enough on 4GB vRAM

Hi, the instructions say it works on 4GB VRAM, but I'm using a GTX 850, which has 4GB of VRAM, and I got a not-enough-memory error!
(screenshots attached)

7z adds friction with no other benefit

There are few reasons not to use zip: it's a good default unless you really benefit from the advantages of more recent compression formats. In this case the file is small, so why bother?

ModuleNotFoundError: No module named 'ffmpy'

Any idea what needs to be done?

C:\StableDiffusion>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.15
Inference Engine exists.
Inference Engine checkout finished.
Installing requirements
Traceback (most recent call last):
File "C:\StableDiffusion\Fooocus\entry_with_update.py", line 45, in
from launch import *
File "C:\StableDiffusion\Fooocus\launch.py", line 75, in
prepare_environment()
File "C:\StableDiffusion\Fooocus\launch.py", line 49, in prepare_environment
run_pip(f"install -r "{requirements_file}"", "requirements")
File "C:\StableDiffusion\Fooocus\modules\launch_util.py", line 94, in run_pip
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}",
File "C:\StableDiffusion\Fooocus\modules\launch_util.py", line 87, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements.
Command: "C:\StableDiffusion\python_embeded\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Requirement already satisfied: torchsde==0.2.5 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from -r requirements_versions.txt (line 1)) (0.2.5)
Collecting einops==0.4.1 (from -r requirements_versions.txt (line 2))
Using cached einops-0.4.1-py3-none-any.whl (28 kB)
Collecting transformers==4.30.2 (from -r requirements_versions.txt (line 3))
Obtaining dependency information for transformers==4.30.2 from https://files.pythonhosted.org/packages/5b/0b/e45d26ccd28568013523e04f325432ea88a442b4e3020b757cf4361f0120/transformers-4.30.2-py3-none-any.whl.metadata
Using cached transformers-4.30.2-py3-none-any.whl.metadata (113 kB)
Requirement already satisfied: safetensors==0.3.1 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from -r requirements_versions.txt (line 4)) (0.3.1)
Collecting accelerate==0.21.0 (from -r requirements_versions.txt (line 5))
Obtaining dependency information for accelerate==0.21.0 from https://files.pythonhosted.org/packages/70/f9/c381bcdd0c3829d723aa14eec8e75c6c377b4ca61ec68b8093d9f35fc7a7/accelerate-0.21.0-py3-none-any.whl.metadata
Using cached accelerate-0.21.0-py3-none-any.whl.metadata (17 kB)
Requirement already satisfied: pyyaml==6.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from -r requirements_versions.txt (line 6)) (6.0)
Collecting Pillow==9.2.0 (from -r requirements_versions.txt (line 7))
Using cached Pillow-9.2.0-cp310-cp310-win_amd64.whl (3.3 MB)
Collecting scipy==1.9.3 (from -r requirements_versions.txt (line 8))
Using cached scipy-1.9.3-cp310-cp310-win_amd64.whl (40.1 MB)
Collecting tqdm==4.64.1 (from -r requirements_versions.txt (line 9))
Using cached tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
Collecting psutil==5.9.5 (from -r requirements_versions.txt (line 10))
Using cached psutil-5.9.5-cp36-abi3-win_amd64.whl (255 kB)
Requirement already satisfied: opencv-python==4.7.0.72 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from -r requirements_versions.txt (line 11)) (4.7.0.72)
Requirement already satisfied: numpy==1.23.5 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from -r requirements_versions.txt (line 12)) (1.23.5)
Collecting pytorch_lightning==1.9.4 (from -r requirements_versions.txt (line 13))
Using cached pytorch_lightning-1.9.4-py3-none-any.whl (827 kB)
Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 14))
Using cached omegaconf-2.2.3-py3-none-any.whl (79 kB)
Collecting gradio==3.39.0 (from -r requirements_versions.txt (line 15))
Obtaining dependency information for gradio==3.39.0 from https://files.pythonhosted.org/packages/82/5f/c815ae438b63ca8b7418acf470369493bac5aa267192702c8601ca67966b/gradio-3.39.0-py3-none-any.whl.metadata
Using cached gradio-3.39.0-py3-none-any.whl.metadata (17 kB)
Requirement already satisfied: boltons>=20.2.1 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (23.0.0)
Requirement already satisfied: torch>=1.6.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (2.0.1+cu118)
Requirement already satisfied: trampoline>=0.1.2 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (0.1.2)
Requirement already satisfied: filelock in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (3.9.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (0.16.4)
Requirement already satisfied: packaging>=20.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (23.0)
Requirement already satisfied: regex!=2019.12.17 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (2022.10.31)
Requirement already satisfied: requests in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (2.28.2)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (0.13.2)
Requirement already satisfied: colorama in c:\users\2700x\appdata\roaming\python\python310\site-packages (from tqdm==4.64.1->-r requirements_versions.txt (line 9)) (0.4.6)
Requirement already satisfied: fsspec[http]>2021.06.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 13)) (2023.6.0)
Requirement already satisfied: torchmetrics>=0.7.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 13)) (0.11.4)
Requirement already satisfied: typing-extensions>=4.0.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 13)) (4.5.0)
Collecting lightning-utilities>=0.6.0.post0 (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 13))
Obtaining dependency information for lightning-utilities>=0.6.0.post0 from https://files.pythonhosted.org/packages/46/ee/8641eeb6a062f383b7d6875604e1f3f83bd2c93a0b4dbcabd3150b32de6e/lightning_utilities-0.9.0-py3-none-any.whl.metadata
Using cached lightning_utilities-0.9.0-py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: antlr4-python3-runtime==4.9.* in c:\users\2700x\appdata\roaming\python\python310\site-packages (from omegaconf==2.2.3->-r requirements_versions.txt (line 14)) (4.9.3)
Collecting aiofiles<24.0,>=22.0 (from gradio==3.39.0->-r requirements_versions.txt (line 15))
Obtaining dependency information for aiofiles<24.0,>=22.0 from https://files.pythonhosted.org/packages/c5/19/5af6804c4cc0fed83f47bff6e413a98a36618e7d40185cd36e69737f3b0e/aiofiles-23.2.1-py3-none-any.whl.metadata
Using cached aiofiles-23.2.1-py3-none-any.whl.metadata (9.7 kB)
Requirement already satisfied: aiohttp~=3.0 in c:\users\2700x\appdata\roaming\python\python310\site-packages (from gradio==3.39.0->-r requirements_versions.txt (line 15)) (3.8.4)
Collecting altair<6.0,>=4.2.0 (from gradio==3.39.0->-r requirements_versions.txt (line 15))
Obtaining dependency information for altair<6.0,>=4.2.0 from https://files.pythonhosted.org/packages/b2/20/5c3b89d6f8d9938325a9330793438389e0dc94c34d921f6da35ec62095f3/altair-5.0.1-py3-none-any.whl.metadata
Using cached altair-5.0.1-py3-none-any.whl.metadata (8.5 kB)
Requirement already satisfied: fastapi in c:\users\2700x\appdata\roaming\python\python310\site-packages (from gradio==3.39.0->-r requirements_versions.txt (line 15)) (0.88.0)
Collecting ffmpy (from gradio==3.39.0->-r requirements_versions.txt (line 15))
Using cached ffmpy-0.3.1.tar.gz (5.5 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'

stderr: error: subprocess-exited-with-error

python setup.py egg_info did not run successfully.
exit code: 1

[6 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "E:\tmp\pip-install-yqg7nw9y\ffmpy_5050bf8fc012433b9206adaf3d432812\setup.py", line 4, in
from ffmpy import version
ModuleNotFoundError: No module named 'ffmpy'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Encountered error while generating package metadata.

See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Linux: xFormers wasn't build with CUDA support

Linux installation went fine, but got "xFormers wasn't build with CUDA support"

env: Linux linux 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

Traceback (most recent call last):
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/alex/Fooocus/modules/async_worker.py", line 83, in worker
    handler(task)
  File "/home/alex/Fooocus/modules/async_worker.py", line 66, in handler
    imgs = pipeline.process(p_txt, n_txt, steps, switch, width, height, seed, callback=callback)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/alex/Fooocus/modules/default_pipeline.py", line 141, in process
    sampled_latent = core.ksampler_with_refiner(
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/alex/Fooocus/modules/core.py", line 225, in ksampler_with_refiner
    samples = sampler.sample(noise, positive_copy, negative_copy, refiner_positive=refiner_positive_copy,
  File "/home/alex/Fooocus/modules/samplers_advanced.py", line 236, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(self.sampler))(self.model_k, noise, sigmas,
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/sampling.py", line 644, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/k_diffusion/external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/samplers.py", line 263, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/model_base.py", line 61, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 620, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 58, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 695, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 527, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/diffusionmodules/util.py", line 123, in checkpoint
    return func(*inputs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 590, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
  File "/home/alex/miniconda3/envs/fooocus/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/alex/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/ldm/modules/attention.py", line 440, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
  File "/home/alex/.local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 197, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/home/alex/.local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 293, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/home/alex/.local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 309, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
  File "/home/alex/.local/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 95, in _dispatch_fw
    return _run_priority_list(
  File "/home/alex/.local/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 70, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(20, 4080, 1, 64) (torch.float16)
     key         : shape=(20, 4080, 1, 64) (torch.float16)
     value       : shape=(20, 4080, 1, 64) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    requires A100 GPU
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64

Image generation saving location

Can we have the default image save location be the app folder? I find it very inconvenient that everything is saved in the gradio temp directory. If there is an option I'm missing, please let me know.
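A sketch of the requested behavior, saving into a dated outputs/ folder next to the app (the folder layout and naming scheme are illustrative, not Fooocus's actual convention):

```python
import datetime
from pathlib import Path

def output_path(app_dir: str = ".", ext: str = "png") -> Path:
    """Build a timestamped save path under an app-local outputs/
    folder, creating the directory if needed."""
    out_dir = Path(app_dir) / "outputs" / datetime.date.today().isoformat()
    out_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%H%M%S_%f")
    return out_dir / f"{stamp}.{ext}"

# Hypothetical usage with a PIL image in hand:
# image.save(output_path())
```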

Make Output window resizeable

Loving the UI; it's so simple to download and use. Great job with it.
One suggestion: make the area where the images come out resizable, or add an option to fit all the output images into that area like in ComfyUI or A1111. I'm not a fan of the scroll bar, since you can't see all the images at once.
But once again, love the work, keep it up!

Connection Error

Hi,
First, thanks for this project; it is excellent.

I installed the project as described on a Windows 11 computer, ran run.bat, and generated my first image.

I can see the image being generated in the WebGUI, but then I get a "Connection Error", and the word "error" appears in the WebGUI.
At the same time, I get the following errors in the CMD box:

[screenshot of console errors]

When I refresh the web browser, the errors disappear, and I can then generate images successfully, although I no longer see the preview image or the finished images in the WebGUI.

This happens whenever I start the software and generate the first image, in both the Edge and Brave browsers.

Thanks

Feature Request: Metadata

Hi, I love how simple it is to generate an image, but I noticed that the generated images don't include any metadata in the image file itself.
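For PNG output, generation settings can be embedded in the file itself as a tEXt chunk via Pillow. This is an illustrative sketch, not existing Fooocus behavior; the key name "parameters" and the parameter dict are assumptions (it mirrors the convention A1111-style tools use):

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical generation settings to embed alongside the pixels.
params = {"prompt": "a cat", "steps": 30, "seed": 12345}

img = Image.new("RGB", (64, 64))
meta = PngInfo()
meta.add_text("parameters", json.dumps(params))  # tEXt chunk in the PNG
img.save("example.png", pnginfo=meta)

# Reading it back: Pillow exposes text chunks via the .text dict.
loaded = Image.open("example.png")
print(loaded.text["parameters"])
```

The settings then survive copying and sharing the file, with no sidecar needed.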

AMD support

After starting, Fooocus exits with a "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx" error.

C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 1.0.15
Inference Engine exists.
Inference Engine checkout finished.
Traceback (most recent call last):
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\entry_with_update.py", line 45, in <module>
    from launch import *
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\launch.py", line 81, in <module>
    from webui import *
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\webui.py", line 6, in <module>
    from modules.default_pipeline import process
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\modules\core.py", line 8, in <module>
    import comfy.model_management
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_management.py", line 104, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\Fooocus\repositories\ComfyUI-from-StabilityAI-Official\comfy\model_management.py", line 74, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

C:\Users\Klaster_1\Downloads\Fooocus_win64_1-1-10>pause
Press any key to continue . . .

Running on Windows 11 and AMD Radeon RX 7900 XTX.
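The packaged build assumes a CUDA device, which is why `torch._C._cuda_init()` raises here. A hedged sketch of what a device fallback chain for AMD cards could look like (this is not Fooocus's current code; `torch_directml` is the separate DirectML package commonly used for AMD GPUs on Windows):

```python
def pick_device() -> str:
    """Probe for a usable compute device instead of assuming CUDA.

    Illustrative only: try CUDA first, then DirectML (the usual route
    for AMD cards on Windows), then fall back to CPU.
    """
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    try:
        import torch_directml  # separate package: pip install torch-directml
        return "dml"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```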

Preview image on every step

It would be awesome if we could have a preview of the image as it is being generated.

Maybe also put a blur over the preview?
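Structurally, step previews are usually done with a callback the sampler invokes every N denoising steps; the UI then decodes (and optionally blurs) the intermediate latent. A minimal stand-in sketch of that control flow, with all names hypothetical and strings standing in for real latents:

```python
def sample(steps, on_step=None, every=5):
    """Toy denoising loop that fires a preview callback every `every`
    steps and on the final step. Not any real sampler's API."""
    latent = "noise"
    for i in range(1, steps + 1):
        latent = f"latent@{i}"           # stand-in for one denoising step
        if on_step and (i % every == 0 or i == steps):
            on_step(i, latent)           # UI would decode, blur, and show
    return latent

previews = []
sample(30, on_step=lambda i, lat: previews.append(i))
print(previews)  # → [5, 10, 15, 20, 25, 30]
```

Previewing only every few steps keeps the decode overhead small relative to the sampling itself.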
