
This project forked from lllyasviel/fooocus


Focus even better on prompting and generating

License: GNU General Public License v3.0

JavaScript 2.88% Python 96.55% CSS 0.41% Jupyter Notebook 0.07% Dockerfile 0.06% Shell 0.03%


Fooocus - mashb1t's 1-Up Edition

The purpose of this fork is to add new features / fix bugs and contribute back to Fooocus.

As a collaborator and contributor to the Fooocus repository, you can find me in almost every issue, pull request, and discussion.

Sadly, the creator of Fooocus has repeatedly gone dark for extended periods of time, which is why I took matters into my own hands.


Additional features included in this fork:

(mostly a reflection of my PRs)

  • ✨ lllyasviel#958 - NSFW image censoring (config and UI)
  • πŸ› lllyasviel#981 - prevent users from skipping/stopping other users tasks in queue (multi-user capabilities) + rework advanced_parameters (removal + PID handling)
  • ✨ lllyasviel#985 - add list of 100 animals to wildcards
  • ✨ lllyasviel#1013 - add advanced parameter for disable_intermediate_results (progress_gallery, prevents UI lag when generation is too fast)
  • ✨ lllyasviel#1039 - add prompt translation
  • ✨ lllyasviel#1043 - add lcm realtime canvas painting (not merged to main in this repository)
  • ✨ lllyasviel#1167 - update model BluePencil XL v0.5 to v3.1.0
  • ✨ lllyasviel#1570 - add preset selection to Gradio UI (session based)
  • πŸ› lllyasviel#1578 - add workaround for changing prompt while generating
  • ✨ lllyasviel#1580 - add preset for SDXL Turbo (model DreamShaperXL_Turbo)
  • ✨ lllyasviel#1616 - add config setting for default_max_image_number
  • πŸ› lllyasviel#1668 - fix path_outputs directory creation if it doesn't exist
  • ✨ show more details for each performance setting, e.g. steps
  • ✨ add default_overwrite_step handling for meta data and gradio (allows turbo preset switching to set default_overwrite_step correctly)
  • ✨ lllyasviel#1762 - add style preview on mouseover
  • πŸ› lllyasviel#1784 - correctly sort files, display deepest directory level first
  • ✨ lllyasviel#1785 - update model Juggernaut XL v6 to v8
  • ✨ lllyasviel#1809 - reduce file size of preview images
  • ✨ lllyasviel#1932 - use consistent file name in gradio
  • ✨ lllyasviel#1863 - image extension support (png, jpg, webp)
  • ✨ lllyasviel#1938 - automatically describe image on uov image upload if prompt is empty
  • ✨ lllyasviel#1940 - meta data handling, schemes: Fooocus (json) and A1111 (plain text). Compatible with Civitai.
  • ✨ lllyasviel#1979 - prevent outdated history log link after midnight
  • ✨ lllyasviel#2032 - add inpaint mask generation functionality using rembg, incl. segmentation support
  • πŸ› lllyasviel#2332 - allow path_outputs to be outside of root dir
  • ✨ lllyasviel#2415 - add performance sdxl lightning (4 steps)
  • and many more (90+) are already merged, see my PRs

✨ = new feature
πŸ› = bugfix
abc = merged (shown struck through in the original list)


Feature showcase

lllyasviel#2032 - Automated Mask Generation + Mask Prompting


Videos by @rayronvictor

Mask generation by cloth category

299493287-5a030d4e-280e-46cb-a8b1-50d264a70d2d.mp4

Mask generation by prompt

299776543-696b97e8-bc05-4d52-86f6-0f1693a7dd25.mp4


lllyasviel#1940 - Metadata Handling - Compatible with Civitai & A1111

This feature offers optional metadata persistence in images, using either the Fooocus (JSON) or the A1111 (plain text) metadata scheme. The latter is 100% compatible with A1111 and Civitai, but cannot be used to reproduce the image outside of Fooocus: so many improvements and special steps happen inside Fooocus that the parameters are not applicable anywhere else.

  • Supports metadata for PNG (PngInfo) as well as JPG and WebP (both EXIF)
  • Save & restore configurations directly from images
  • Optionally configure a copyright / creator tag
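
The A1111 plain-text scheme mentioned above can also be inspected programmatically. Below is a minimal, hedged sketch of parsing such a parameter block with the Python standard library, assuming the common A1111 layout (prompt lines, an optional "Negative prompt:" line, then a comma-separated key/value line starting with "Steps:"); the exact keys Fooocus writes may differ:

```python
import re

def parse_a1111_parameters(text: str) -> dict:
    """Parse an A1111-style plain-text parameter block into a dict.

    Assumes the common layout: positive prompt line(s), an optional
    'Negative prompt:' line, then one comma-separated key/value line.
    """
    result = {"prompt": "", "negative_prompt": ""}
    prompt_lines, kv_line = [], ""
    for line in text.strip().split("\n"):
        if line.startswith("Negative prompt:"):
            result["negative_prompt"] = line[len("Negative prompt:"):].strip()
        elif re.match(r"Steps: \d+", line):
            kv_line = line
        else:
            prompt_lines.append(line)
    result["prompt"] = "\n".join(prompt_lines).strip()
    # Split the key/value line on "key: value" pairs separated by commas.
    for key, value in re.findall(r"([\w +.]+): ([^,]+)", kv_line):
        result[key.strip()] = value.strip()
    return result

sample = (
    "forest elf\n"
    "Negative prompt: low quality\n"
    "Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 4.0, Seed: 12345"
)
meta = parse_a1111_parameters(sample)
print(meta["prompt"], meta["Seed"])  # prints: forest elf 12345
```

This is an illustration only; in practice, use Fooocus's built-in metadata reader (Image Input > Metadata tab).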


Gradio (setting in Developer Debug Mode)

The default is the Fooocus scheme.

Config options

"default_save_metadata_to_images": true,
"default_metadata_scheme": "a1111",
"metadata_created_by": "mashb1t"

Arg --disable-metadata

--disable-metadata completely prevents metadata processing and output in Gradio

Metadata Reader

  1. Open the Image Input > Metadata tab
  2. Drag & drop an image onto the image upload
  3. The image metadata is previewed automatically
  4. Apply the metadata to the Gradio inputs with a button click

Fooocus scheme

A1111 scheme

Metadata in files

  • Speed, Fooocus scheme
  • LCM, A1111 scheme (yes, with a negative prompt, because it technically exists but has no influence)
  • Speed, A1111 scheme

Civitai

  • Speed, Fooocus scheme
  • LCM, A1111 scheme
  • Speed, A1111 scheme


Non-cherry-picked random batch, generated by just typing two words, "forest elf", without any parameter tweaking and without any strange prompt tags.

See also non-cherry-picked generalization and diversity tests here and here and here and here.

In the entire open source community, only Fooocus can achieve this level of non-cherry-picked quality.

Fooocus

Fooocus is an image generation software (based on Gradio).

Fooocus is a rethinking of Stable Diffusion's and Midjourney's designs:

  • Learned from Stable Diffusion, the software is offline, open source, and free.

  • Learned from Midjourney, manual tweaking is not needed, and users only need to focus on the prompts and images.

Fooocus has included and automated lots of inner optimizations and quality improvements. Users can forget all those difficult technical parameters, and just enjoy the interaction between human and computer to "explore new mediums of thought and expanding the imaginative powers of the human species" [1].

Fooocus has simplified the installation: between pressing "download" and generating the first image, fewer than 3 mouse clicks are needed. The minimal GPU memory requirement is 4GB (Nvidia).

[1] David Holz, 2019.

Recently, many fake websites have appeared in Google results when you search for "fooocus". Do not trust those; this is the only official source of Fooocus.

Moving from Midjourney to Fooocus

Using Fooocus is as easy as (probably easier than) using Midjourney, but this does not mean we lack functionality. Below are the details.

| Midjourney | Fooocus |
| --- | --- |
| High-quality text-to-image without needing much prompt engineering or parameter tuning (unknown method) | High-quality text-to-image without needing much prompt engineering or parameter tuning (Fooocus has an offline GPT-2 based prompt processing engine and many sampling improvements, so results are always beautiful, no matter whether your prompt is as short as "house in garden" or as long as 1000 words) |
| V1 V2 V3 V4 | Input Image -> Upscale or Variation -> Vary (Subtle) / Vary (Strong) |
| U1 U2 U3 U4 | Input Image -> Upscale or Variation -> Upscale (1.5x) / Upscale (2x) |
| Inpaint / Up / Down / Left / Right (Pan) | Input Image -> Inpaint or Outpaint -> Inpaint / Up / Down / Left / Right (Fooocus uses its own inpaint algorithm and inpaint models, so results are more satisfying than in all other software that uses the standard SDXL inpaint method/model) |
| Image Prompt | Input Image -> Image Prompt (Fooocus uses its own image prompt algorithm, so result quality and prompt understanding are more satisfying than in all other software that uses standard SDXL methods like standard IP-Adapters or Revisions) |
| --style | Advanced -> Style |
| --stylize | Advanced -> Advanced -> Guidance |
| --niji | Multiple launchers: "run.bat", "run_anime.bat", and "run_realistic.bat". Fooocus supports SDXL models on Civitai (you can search for "Civitai" if you do not know about it) |
| --quality | Advanced -> Quality |
| --repeat | Advanced -> Image Number |
| Multi Prompts (::) | Just use multiple lines of prompts |
| Prompt Weights | You can use "I am (happy:1.5)". Fooocus uses A1111's reweighting algorithm, so results are better than in ComfyUI if users directly copy prompts from Civitai. (If prompts are written with ComfyUI's reweighting, users are less likely to copy the prompt text, as they prefer dragging files.) To use an embedding, write "(embedding:file_name:1.1)" |
| --no | Advanced -> Negative Prompt |
| --ar | Advanced -> Aspect Ratios |
| InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap |
| Describe | Input Image -> Describe |
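
The "(token:weight)" emphasis syntax shown in the Prompt Weights row can be illustrated with a short sketch. This is a hedged illustration of extracting weighted spans from a prompt string, not Fooocus's actual prompt parser:

```python
import re

# Matches A1111-style "(token:1.5)" spans; the embedding form
# "(embedding:file_name:1.1)" is intentionally not covered here.
WEIGHT_RE = re.compile(r"\(([^():]+):([\d.]+)\)")

def extract_weights(prompt: str) -> list[tuple[str, float]]:
    """Return (token, weight) pairs for '(token:1.5)' spans."""
    return [(tok, float(w)) for tok, w in WEIGHT_RE.findall(prompt)]

print(extract_weights("I am (happy:1.5) in a (forest:0.8)"))
# prints: [('happy', 1.5), ('forest', 0.8)]
```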

We also have a few things borrowed from the best parts of LeonardoAI:

| LeonardoAI | Fooocus |
| --- | --- |
| Prompt Magic | Advanced -> Style -> Fooocus V2 |
| Advanced Sampler Parameters (like Contrast/Sharpness/etc.) | Advanced -> Advanced -> Sampling Sharpness / etc. |
| User-friendly ControlNets | Input Image -> Image Prompt -> Advanced |

Fooocus also developed many "fooocus-only" features for advanced users to get perfect results. Click here to browse the advanced features.

Download

Windows

You can directly download Fooocus with:

>>> Click here to download <<<

After you download the file, please uncompress it and then run the "run.bat".


The first time you launch the software, it will automatically download models:

  1. It will download default models to the folder "Fooocus\models\checkpoints", depending on the chosen preset. You can download them in advance if you do not want the automatic download.
  2. Note that if you use inpaint, the first time you inpaint an image it will download Fooocus's own inpaint control model from here as the file "Fooocus\models\inpaint\inpaint_v26.fooocus.patch" (the size of this file is 1.28GB).

After Fooocus 2.1.60, you will also have run_anime.bat and run_realistic.bat. They are different model presets (and require different models, but they will be automatically downloaded). Check here for more details.

After Fooocus 2.3.0 you can also switch presets directly in the browser. Keep in mind to add these arguments if you want to change the default behavior:

  • Use --disable-preset-selection to disable preset selection in the browser.
  • Use --always-download-new-model to download missing models on preset switch. By default, Fooocus falls back to the previous_default_models defined in the corresponding preset; see the terminal output for details.


If you already have these files, you can copy them to the above locations to speed up installation.

Note that if you see "MetadataIncompleteBuffer" or "PytorchStreamReader", then your model files are corrupted. Please download models again.

Below is a test on a relatively low-end laptop with 16GB of system RAM and 6GB of VRAM (Nvidia 3060 laptop). The speed on this machine is about 1.35 seconds per iteration. Pretty impressive; nowadays laptops with a 3060 are usually available at a very acceptable price.


Besides, users of many other software packages recently report that Nvidia drivers above version 532 are sometimes 10x slower than Nvidia driver 531. If your generation time is very long, consider downloading Nvidia Driver 531 Laptop or Nvidia Driver 531 Desktop.

Note that the minimal requirement is 4GB of Nvidia GPU memory (4GB VRAM) and 8GB of system memory (8GB RAM). This requires using Microsoft's Virtual Swap technique, which is automatically enabled by your Windows installation in most cases, so you often do not need to do anything about it. However, if you are not sure, or if you manually turned it off (would anyone really do that?), or if you see any "RuntimeError: CPUAllocator", you can enable it here:

Click here to see the image instructions.


Also make sure that you have at least 40GB of free space on each drive if you still see "RuntimeError: CPUAllocator"!

Please open an issue if you use similar devices but still cannot achieve acceptable performance.

Note that the minimal requirement for different platforms is different.

See also the common problems and troubleshoots here.

Switching from Fooocus to this fork

  1. Open a terminal in your Fooocus folder (the one containing your config.txt)

  2. Execute git status. You should see the following:

    On branch main
    Your branch is up to date with 'origin/main'.
    
    nothing to commit, working tree clean
    

    If not, execute git reset --hard origin/main and check git status again.

  3. Execute

    git remote set-url origin https://github.com/mashb1t/Fooocus.git
    git reset --hard origin/main
    git pull
    
  4. Activate your venv (not necessary when installed from the 7z archive) and update your Python packages depending on your environment (7z, venv, conda, etc.)

    Example for Windows (7z): ..\python_embeded\python.exe -m pip install -r "requirements_versions.txt"

  5. Start Fooocus by running run.bat or the corresponding entry point (same as before)

OR

Windows: download the 7z file, extract it, and run run.bat. You may want to copy over already-downloaded checkpoints / LoRAs / etc.

Colab

(Last tested - 2024 Mar 18 by mashb1t)

Colab Info
Open In Colab Fooocus Official

In Colab, you can modify the last line to !python entry_with_update.py --share --always-high-vram or !python entry_with_update.py --share --always-high-vram --preset anime or !python entry_with_update.py --share --always-high-vram --preset realistic for Fooocus Default/Anime/Realistic Edition.

You can also change the preset in the UI. Please be aware that this may lead to timeouts after 60 seconds. If this is the case, please wait until the download has finished, then change the preset back to the initial one and again to the one you selected, or reload the page.

Note that this Colab will disable the refiner by default because free-tier Colab resources are relatively limited (and some "big" features like Image Prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image always works on free-tier Colab.

Using --always-high-vram shifts resource allocation from RAM to VRAM and achieves the overall best balance between performance, flexibility and stability on the default T4 instance. Please find more information here.

Thanks to camenduru for the template!

Linux (Using Anaconda)

If you want to use Anaconda/Miniconda, you can

git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
conda env create -f environment.yaml
conda activate fooocus
pip install -r requirements_versions.txt

Then download the models: download the default models to the folder "Fooocus/models/checkpoints". Or let Fooocus automatically download the models using the launcher:

conda activate fooocus
python entry_with_update.py

Or, if you want to open a remote port, use

conda activate fooocus
python entry_with_update.py --listen

Use python entry_with_update.py --preset anime or python entry_with_update.py --preset realistic for Fooocus Anime/Realistic Edition.

Linux (Using Python Venv)

Your Linux needs Python 3.10 installed. Assuming your Python can be called with the command python3 and your venv system is working, you can

git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
python3 -m venv fooocus_env
source fooocus_env/bin/activate
pip install -r requirements_versions.txt

See the above sections for model downloads. You can launch the software with:

source fooocus_env/bin/activate
python entry_with_update.py

Or, if you want to open a remote port, use

source fooocus_env/bin/activate
python entry_with_update.py --listen

Use python entry_with_update.py --preset anime or python entry_with_update.py --preset realistic for Fooocus Anime/Realistic Edition.

Linux (Using native system Python)

If you know what you are doing, your Linux already has Python 3.10 installed, and your Python can be called with the command python3 (and Pip with pip3), you can

git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
pip3 install -r requirements_versions.txt

See the above sections for model downloads. You can launch the software with:

python3 entry_with_update.py

Or, if you want to open a remote port, use

python3 entry_with_update.py --listen

Use python3 entry_with_update.py --preset anime or python3 entry_with_update.py --preset realistic for Fooocus Anime/Realistic Edition.

Linux (AMD GPUs)

Note that the minimal requirement for different platforms is different.

Same as the above instructions, but you need to change torch to the AMD (ROCm) version:

pip uninstall torch torchvision torchaudio torchtext functorch xformers 
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

AMD is not intensively tested, however. The AMD support is in beta.

Use python entry_with_update.py --preset anime or python entry_with_update.py --preset realistic for Fooocus Anime/Realistic Edition.

Windows (AMD GPUs)

Note that the minimal requirement for different platforms is different.

Same as the Windows instructions above. Download the software and edit the content of run.bat as:

.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
pause

Then run the run.bat.

AMD is not intensively tested, however. The AMD support is in beta.

For AMD, use .\python_embeded\python.exe entry_with_update.py --directml --preset anime or .\python_embeded\python.exe entry_with_update.py --directml --preset realistic for Fooocus Anime/Realistic Edition.

Mac

Note that the minimal requirement for different platforms is different.

Mac is not intensively tested. Below is an unofficial guideline for using Mac. You can discuss problems here.

You can install Fooocus on Apple silicon (M1 or M2) Macs running macOS 'Catalina' or newer. Fooocus runs on Apple silicon computers via PyTorch MPS device acceleration. Apple silicon computers do not come with a dedicated graphics card, resulting in significantly longer image processing times compared to computers with dedicated graphics cards.

  1. Install the conda package manager and pytorch nightly. Read the Accelerated PyTorch training on Mac Apple Developer guide for instructions. Make sure pytorch recognizes your MPS device.
  2. Open the macOS Terminal app and clone this repository with git clone https://github.com/lllyasviel/Fooocus.git.
  3. Change to the new Fooocus directory, cd Fooocus.
  4. Create a new conda environment, conda env create -f environment.yaml.
  5. Activate your new conda environment, conda activate fooocus.
  6. Install the packages required by Fooocus, pip install -r requirements_versions.txt.
  7. Launch Fooocus by running python entry_with_update.py. (Some Mac M2 users may need python entry_with_update.py --disable-offload-from-vram to speed up model loading/unloading.) The first time you run Fooocus, it will automatically download the Stable Diffusion SDXL models and will take a significant amount of time, depending on your internet connection.

Use python entry_with_update.py --preset anime or python entry_with_update.py --preset realistic for Fooocus Anime/Realistic Edition.

Docker

See docker.md

Download Previous Version

See the guidelines here.

Minimal Requirement

Below is the minimal requirement for running Fooocus locally. If your device capability is lower than this spec, you may not be able to use Fooocus locally. (Please let us know, in any case, if your device capability is lower but Fooocus still works.)

| Operating System | GPU | Minimal GPU Memory | Minimal System Memory | System Swap | Note |
| --- | --- | --- | --- | --- | --- |
| Windows/Linux | Nvidia RTX 4XXX | 4GB | 8GB | Required | fastest |
| Windows/Linux | Nvidia RTX 3XXX | 4GB | 8GB | Required | usually faster than RTX 2XXX |
| Windows/Linux | Nvidia RTX 2XXX | 4GB | 8GB | Required | usually faster than GTX 1XXX |
| Windows/Linux | Nvidia GTX 1XXX | 8GB (* 6GB uncertain) | 8GB | Required | only marginally faster than CPU |
| Windows/Linux | Nvidia GTX 9XX | 8GB | 8GB | Required | faster or slower than CPU |
| Windows/Linux | Nvidia GTX < 9XX | Not supported | / | / | / |
| Windows | AMD GPU | 8GB (updated 2023 Dec 30) | 8GB | Required | via DirectML (* ROCm is on hold), about 3x slower than Nvidia RTX 3XXX |
| Linux | AMD GPU | 8GB | 8GB | Required | via ROCm, about 1.5x slower than Nvidia RTX 3XXX |
| Mac | M1/M2 (MPS) | Shared | Shared | Shared | about 9x slower than Nvidia RTX 3XXX |
| Windows/Linux/Mac | CPU only | 0GB | 32GB | Required | about 17x slower than Nvidia RTX 3XXX |

* AMD GPU ROCm (on hold): AMD is still working on supporting ROCm on Windows.

* Nvidia GTX 1XXX 6GB uncertain: Some people report 6GB success on GTX 10XX, but some other people report failure cases.

Note that Fooocus is only for extremely high-quality image generation. We will not support smaller models that would reduce the requirements but sacrifice result quality.

Troubleshoot

See the common problems here.

Default Models

Given different goals, the default models and configs of Fooocus are different:

| Task | Windows | Linux args | Main Model | Refiner | Config |
| --- | --- | --- | --- | --- | --- |
| General | run.bat | (none) | juggernautXL_v8Rundiffusion | not used | here |
| Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | not used | here |
| Anime | run_anime.bat | --preset anime | animaPencilXL_v100 | not used | here |

Note that the download is automatic; you do not need to do anything if the internet connection is okay. However, you can download the models manually (or move them from somewhere else) if you have your own preparation.

UI Access and Authentication

In addition to running on localhost, Fooocus can also expose its UI in two ways:

  • Local UI listener: use --listen (specify a port with e.g. --port 8888).
  • API access: use --share (registers an endpoint at .gradio.live).

In both cases, access is unauthenticated by default. You can add basic authentication by creating a file called auth.json in the main directory, containing a list of JSON objects with the keys user and pass (see the example in auth-example.json).
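
For example, a minimal auth.json could look like this (the user names and passwords below are placeholders you should replace):

```json
[
    {"user": "alice", "pass": "change-me"},
    {"user": "bob", "pass": "change-me-too"}
]
```

Each object defines one allowed user/pass pair.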

List of "Hidden" Tricks

The below things are already inside the software, and users do not need to do anything about these.

  1. GPT-2 based prompt expansion as a dynamic style "Fooocus V2" (similar to Midjourney's hidden pre-processing and "raw" mode, or LeonardoAI's Prompt Magic).
  2. Native refiner swap inside one single k-sampler. The advantage is that the refiner model can now reuse the base model's momentum (or ODE's history parameters) collected from k-sampling to achieve more coherent sampling. In Automatic1111's high-res fix and ComfyUI's node system, the base model and refiner use two independent k-samplers, which means the momentum is largely wasted, and the sampling continuity is broken. Fooocus uses its own advanced k-diffusion sampling that ensures seamless, native, and continuous swap in a refiner setup. (Update Aug 13: Actually, I discussed this with Automatic1111 several days ago, and it seems that the β€œnative refiner swap inside one single k-sampler” is merged into the dev branch of webui. Great!)
  3. Negative ADM guidance. Because the highest resolution level of XL Base does not have cross attentions, the positive and negative signals for XL's highest resolution level cannot receive enough contrasts during the CFG sampling, causing the results to look a bit plastic or overly smooth in certain cases. Fortunately, since the XL's highest resolution level is still conditioned on image aspect ratios (ADM), we can modify the adm on the positive/negative side to compensate for the lack of CFG contrast in the highest resolution level. (Update Aug 16, the IOS App Draw Things will support Negative ADM Guidance. Great!)
  4. We implemented a carefully tuned variation of Section 5.1 of "Improving Sample Quality of Diffusion Models Using Self-Attention Guidance". The weight is set to very low, but this is Fooocus's final guarantee to make sure that the XL will never yield an overly smooth or plastic appearance (examples here). This can almost eliminate all cases for which XL still occasionally produces overly smooth results, even with negative ADM guidance. (Update 2023 Aug 18, the Gaussian kernel of SAG is changed to an anisotropic kernel for better structure preservation and fewer artifacts.)
  5. We modified the style templates a bit and added the "cinematic-default".
  6. We tested the "sd_xl_offset_example-lora_1.0.safetensors" and it seems that when the lora weight is below 0.5, the results are always better than XL without lora.
  7. The parameters of samplers are carefully tuned.
  8. Because XL uses positional encoding for generation resolution, images generated by several fixed resolutions look a bit better than those from arbitrary resolutions (because the positional encoding is not very good at handling int numbers that are unseen during training). This suggests that the resolutions in UI may be hard coded for best results.
  9. Separated prompts for two different text encoders seem unnecessary. Separated prompts for the base model and refiner may work, but the effects are random, and we refrain from implementing this.
  10. The DPM family seems well-suited for XL since XL sometimes generates overly smooth texture, but the DPM family sometimes generates overly dense detail in texture. Their joint effect looks neutral and appealing to human perception.
  11. A carefully designed system for balancing multiple styles as well as prompt expansion.
  12. Using automatic1111's method to normalize prompt emphasizing. This significantly improves results when users directly copy prompts from civitai.
  13. The joint swap system of the refiner now also supports img2img and upscale in a seamless way.
  14. CFG Scale and TSNR correction (tuned for SDXL) when CFG is bigger than 10.

Customization

After the first time you run Fooocus, a config file will be generated at Fooocus\config.txt. This file can be edited to change the model path or default parameters.

For example, an edited Fooocus\config.txt (this file will be generated after the first launch) may look like this:

{
    "path_checkpoints": "D:\\Fooocus\\models\\checkpoints",
    "path_loras": "D:\\Fooocus\\models\\loras",
    "path_embeddings": "D:\\Fooocus\\models\\embeddings",
    "path_vae_approx": "D:\\Fooocus\\models\\vae_approx",
    "path_upscale_models": "D:\\Fooocus\\models\\upscale_models",
    "path_inpaint": "D:\\Fooocus\\models\\inpaint",
    "path_controlnet": "D:\\Fooocus\\models\\controlnet",
    "path_clip_vision": "D:\\Fooocus\\models\\clip_vision",
    "path_fooocus_expansion": "D:\\Fooocus\\models\\prompt_expansion\\fooocus_expansion",
    "path_outputs": "D:\\Fooocus\\outputs",
    "default_model": "realisticStockPhoto_v10.safetensors",
    "default_refiner": "",
    "default_loras": [["lora_filename_1.safetensors", 0.5], ["lora_filename_2.safetensors", 0.5]],
    "default_cfg_scale": 3.0,
    "default_sampler": "dpmpp_2m",
    "default_scheduler": "karras",
    "default_negative_prompt": "low quality",
    "default_positive_prompt": "",
    "default_styles": [
        "Fooocus V2",
        "Fooocus Photograph",
        "Fooocus Negative"
    ]
}

Many other keys, formats, and examples are in Fooocus\config_modification_tutorial.txt (this file will be generated after the first launch).

Think twice before you change the config. If you find yourself breaking things, just delete Fooocus\config.txt and Fooocus will go back to the defaults.

A safer way is just to try "run_anime.bat" or "run_realistic.bat" - they should already be good enough for different tasks.
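
Since a malformed config.txt silently sends Fooocus back to its defaults, it can help to validate your edits before launching. Below is a minimal sketch using only the Python standard library; the file content and the backslash heuristic are illustrative, not part of Fooocus:

```python
import json
import os
import tempfile

def validate_config(path: str) -> dict:
    """Load a Fooocus-style config.txt and fail loudly if the JSON is malformed."""
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)  # raises json.JSONDecodeError on malformed JSON
    for key, value in cfg.items():
        # In JSON, Windows paths need doubled backslashes; a bare "\t" in
        # the file would already have been parsed into a tab character here.
        if key.startswith("path_") and isinstance(value, str) and "\t" in value:
            print(f"warning: {key} contains a tab; use \\\\ in Windows paths")
    return cfg

# Usage: validate a tiny sample config written to a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write('{"default_cfg_scale": 3.0, "path_outputs": "D:\\\\Fooocus\\\\outputs"}')
    sample_path = f.name

cfg = validate_config(sample_path)
print(cfg["default_cfg_scale"])  # prints: 3.0
os.unlink(sample_path)
```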

Note that user_path_config.txt is deprecated and has already been removed.

All CMD Flags

entry_with_update.py  [-h] [--listen [IP]] [--port PORT]
                      [--disable-header-check [ORIGIN]]
                      [--web-upload-size WEB_UPLOAD_SIZE]
                      [--hf-mirror HF_MIRROR]
                      [--external-working-path PATH [PATH ...]]
                      [--output-path OUTPUT_PATH]
                      [--temp-path TEMP_PATH]
                      [--cache-path CACHE_PATH] [--in-browser]
                      [--disable-in-browser]
                      [--gpu-device-id DEVICE_ID]
                      [--async-cuda-allocation | --disable-async-cuda-allocation]
                      [--disable-attention-upcast]
                      [--all-in-fp32 | --all-in-fp16]
                      [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
                      [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]   
                      [--vae-in-cpu]
                      [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]
                      [--directml [DIRECTML_DEVICE]]
                      [--disable-ipex-hijack]
                      [--preview-option [none,auto,fast,taesd]]
                      [--attention-split | --attention-quad | --attention-pytorch]
                      [--disable-xformers]
                      [--always-gpu | --always-high-vram | --always-normal-vram |
                      --always-low-vram | --always-no-vram | --always-cpu [CPU_NUM_THREADS]]
                      [--always-offload-from-vram]
                      [--pytorch-deterministic] [--disable-server-log]  
                      [--debug-mode] [--is-windows-embedded-python]     
                      [--disable-server-info] [--multi-user] [--share]  
                      [--preset PRESET] [--disable-preset-selection]    
                      [--language LANGUAGE]
                      [--disable-offload-from-vram] [--theme THEME]     
                      [--disable-image-log] [--disable-analytics]       
                      [--disable-metadata] [--disable-preset-download]  
                      [--enable-describe-uov-image]
                      [--always-download-new-model]

Advanced Features

Click here to browse the advanced features.

Fooocus also has many community forks, just like SD-WebUI's vladmandic/automatic and anapnoe/stable-diffusion-webui-ux, for enthusiastic users who want to try!

Fooocus' forks
fenneishi/Fooocus-Control
runew0lf/RuinedFooocus
MoonRide303/Fooocus-MRE
metercai/SimpleSDXL
and so on ...

See also About Forking and Promotion of Forks.

Thanks

Special thanks to twri and 3Diva and Marc K3nt3L for creating additional SDXL styles available in Fooocus. Thanks daswer123 for contributing the Canvas Zoom!

Update Log

The log is here.

Localization/Translation/I18N

We need your help! Please help translate Fooocus into international languages.

You can put json files in the language folder to translate the user interface.

For example, below is the content of Fooocus/language/example.json:

{
  "Generate": "η”Ÿζˆ",
  "Input Image": "ε…₯εŠ›η”»εƒ",
  "Advanced": "κ³ κΈ‰",
  "SAI 3D Model": "SAI 3D Modèle"
}

If you add --language example arg, Fooocus will read Fooocus/language/example.json to translate the UI.

For example, you can edit the ending line of Windows run.bat as

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --language example

Or run_anime.bat as

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --language example --preset anime

Or run_realistic.bat as

.\python_embeded\python.exe -s Fooocus\entry_with_update.py --language example --preset realistic

For practical translation, you may create your own file like Fooocus/language/jp.json or Fooocus/language/cn.json and then use the flag --language jp or --language cn. These files do not exist yet; we need your help to create them!

Note that if no --language is given and at the same time Fooocus/language/default.json exists, Fooocus will always load Fooocus/language/default.json for translation. By default, the file Fooocus/language/default.json does not exist.
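
The translation mechanism described above is a flat key/value lookup. Below is a minimal sketch of how such a file can be written and consumed; the fallback-to-original behavior shown here is an assumption for illustration, not Fooocus's exact implementation:

```python
import json
import os
import tempfile

# A flat JSON object maps original UI strings to translations
# (content is illustrative, mirroring the example.json format above).
translations = {"Generate": "η”Ÿζˆ", "Input Image": "ε…₯εŠ›η”»εƒ"}

with tempfile.NamedTemporaryFile(
    "w", suffix=".json", delete=False, encoding="utf-8"
) as f:
    json.dump(translations, f, ensure_ascii=False)
    lang_path = f.name

with open(lang_path, encoding="utf-8") as f:
    table = json.load(f)

def translate(label: str) -> str:
    """Look up a UI label, falling back to the untranslated label."""
    return table.get(label, label)

print(translate("Generate"), translate("Advanced"))  # prints: η”Ÿζˆ Advanced
os.unlink(lang_path)
```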


Contributors

alexdnk, blckbx, camenduru, cantor-set, chenxinlong, cocktailpeanut, crohrer, daswer123, docppp, dooglewoogle, e52fa787, eddyizm, hisk2323, hswlab, hydra213, josephrocca, khanvilkarvishvesh, lllyasviel, mashb1t, mindofmatter, moonride303, rayronvictor, rsl8, shinshin86, v1sionverse, wari-dudafa, xhoxye, xynydev, zaldos, zxilly



fooocus's Issues

[Bug]: preset selection terminated on run

Prerequisites

Describe the problem

When changing the preset dynamically at runtime, the application initiates the download of the corresponding model. However, even after the download is complete, the application does not actually switch to the newly downloaded model; you have to switch back to the initial preset and then to the desired preset again for the change to take effect. On image generation the process then terminates with ^C, but everything works when the preset is passed as a startup argument.

Full console log output

Refiner unloaded.
model_type: EPS
UNet ADM Dimension: 2816
Loaded preset: /content/Fooocus/presets/anime.json
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus  V1 Expansion: Vocab with 642 words.
Fooocus  Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.67 seconds
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860 or https://ff893b401dffdffdfd.gradio.live
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Seed = 1417267429763820378
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
model_type: EPS
UNet ADM Dimension: 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Refiner model loaded: /content/Fooocus/models/checkpoints/DreamShaper_8_pruned.safetensors
model_type: EPS
UNet ADM Dimension: 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra: {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/BluePencilXL_v050.safetensors] with 788 keys at weight 0.5.
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.5], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/DreamShaper_8_pruned.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
^C

Version

Fooocus 2.1.859

Where are you running Fooocus?

Cloud (other)

Operating System

No response

What browsers are you seeing the problem on?

Chrome

[Bug]: Unable to change to Turbo mode.

Prerequisites

Describe the problem

(screenshots attached to the original issue)

Full console log output

Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.857
Installing requirements
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_version6Rundiffusion.safetensors" to /content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors

100% 6.62G/6.62G [00:32<00:00, 220MB/s]
Downloading: "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors" to /content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors

100% 47.3M/47.3M [00:00<00:00, 56.0MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth" to /content/Fooocus/models/vae_approx/xlvaeapp.pth

100% 209k/209k [00:00<00:00, 7.50MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt" to /content/Fooocus/models/vae_approx/vaeapp_sd15.pth

100% 209k/209k [00:00<00:00, 7.78MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors" to /content/Fooocus/models/vae_approx/xl-to-v1_interposer-v3.1.safetensors

100% 6.25M/6.25M [00:00<00:00, 80.3MB/s]
Downloading: "https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin" to /content/Fooocus/models/prompt_expansion/fooocus_expansion/pytorch_model.bin

100% 335M/335M [00:01<00:00, 241MB/s]
Running on local URL:  http://127.0.0.1:7865
Total VRAM 15102 MB, total RAM 12979 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
2023-12-31 08:42:11.649385: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-31 08:42:11.649492: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-31 08:42:11.788376: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-12-31 08:42:14.227482: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Refiner unloaded.
Running on public URL: https://b5393dcc0c9d3bdd48.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/content/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.60 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://b5393dcc0c9d3bdd48.gradio.live
Loaded preset: /content/Fooocus/presets/turbo.json
Downloading: "https://huggingface.co/Lykon/dreamshaper-xl-turbo/resolve/main/DreamShaperXL_Turbo_dpmppSdeKarras_half_pruned_6.safetensors" to /content/Fooocus/models/checkpoints/DreamShaperXL_Turbo_dpmppSdeKarras_half_pruned_6.safetensors

100% 6.46G/6.46G [00:54<00:00, 127MB/s] 
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/content/Fooocus/webui.py", line 517, in preset_selection_change
    return modules.meta_parser.load_parameter_button_click(json.dumps(preset_prepared))
TypeError: load_parameter_button_click() missing 1 required positional argument: 'is_generating'

Version

2.1.857

Where are you running Fooocus?

Cloud (Gradio)

Operating System

Windows 10

What browsers are you seeing the problem on?

Chrome

[Bug]: juggernautXL_v8Rundiffusion does not download; Always getting 'Couldn't install requirements'; Stuck on 'Waiting for task to start ...'

Prerequisites

Describe the problem

I am not sure I'm using this fork correctly, so let me tell you how I've "installed" it:

  • I have downloaded the fork, which is just a Fooocus directory (screenshot attached)

  • I have copied the python_embeded directory and the 3 .bat runners alongside the Fooocus folder (screenshot attached)

The main repository comes with juggernautXL_version6Rundiffusion, while this fork tries to download juggernautXL_v8Rundiffusion.safetensors, so I can't simply reuse the existing file.

Full console log output

Every time I execute run.bat I get:

C:\Fooocus-fork>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.862
Installing requirements
Couldn't install requirements.
Command: "C:\Fooocus-fork\python_embeded\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1

then a bunch of "Requirement already satisfied:" lines, and finally

CMD Failed requirements: install -r "requirements_versions.txt"
Downloading: "https://civitai.com/api/download/models/288982" to C:\Fooocus-fork\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors

120kB [00:00, 1.70MB/s]
Total VRAM 8192 MB, total RAM 16311 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
Exception in thread Thread-5 (worker):
Traceback (most recent call last):
  File "C:\Fooocus-fork\Fooocus\modules\patch.py", line 465, in loader
    result = original_loader(*args, **kwargs)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\safetensors\torch.py", line 259, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Fooocus-fork\Fooocus\modules\async_worker.py", line 33, in worker
    import modules.default_pipeline as pipeline
  File "C:\Fooocus-fork\Fooocus\modules\default_pipeline.py", line 253, in <module>
    refresh_everything(
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\Fooocus\modules\default_pipeline.py", line 233, in refresh_everything
    refresh_base_model(base_model_name)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\Fooocus\modules\default_pipeline.py", line 69, in refresh_base_model
    model_base = core.load_model(filename)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Fooocus-fork\Fooocus\modules\core.py", line 145, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings)
  File "C:\Fooocus-fork\Fooocus\ldm_patched\modules\sd.py", line 427, in load_checkpoint_guess_config
    sd = ldm_patched.modules.utils.load_torch_file(ckpt_path)
  File "C:\Fooocus-fork\Fooocus\ldm_patched\modules\utils.py", line 13, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
  File "C:\Fooocus-fork\Fooocus\modules\patch.py", line 481, in loader
    raise ValueError(exp)
ValueError: Error while deserializing header: HeaderTooLarge
File corrupted: C:\Fooocus-fork\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Fooocus has tried to move the corrupted file to C:\Fooocus-fork\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors.corrupted
You may try again now and Fooocus will download models again.

Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.

Version

2.1.862

Where are you running Fooocus?

Locally

Operating System

Windows 10

What browsers are you seeing the problem on?

Chrome

[Bug]: --disable-preset-download causes a python error

Prerequisites

Describe the problem

I'm running your fork and encountered an issue with a preset I had made. It looked like something related to the model download, so I checked an unmodified preset whose model I already have. It worked until I added the --disable-preset-download option to the batch file. Removing the option is not a big deal, but I wanted to report it. I haven't used Fooocus in a few days, so all I can say is that it worked a week ago.
Thanks for your fork and all of your hard work on the main branch too!

Full console log output

.\run_anime.bat

A:\ai_stuff\programs\Fooocus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset anime --disable-preset-download
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--preset', 'anime', '--disable-preset-download']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865 (mashb1t)
Loaded preset: A:\ai_stuff\programs\Fooocus\Fooocus\presets\anime.json
Skipped model download.
Traceback (most recent call last):
  File "A:\ai_stuff\programs\Fooocus\Fooocus\entry_with_update.py", line 46, in <module>
    from launch import *
  File "A:\ai_stuff\programs\Fooocus\Fooocus\launch.py", line 124, in <module>
    config.default_base_model_name, config.checkpoint_downloads = download_models(
TypeError: cannot unpack non-iterable NoneType object

Version

2.1.865 (mashb1t)

Where are you running Fooocus?

Locally

Operating System

Windows 11

What browsers are you seeing the problem on?

Firefox

[Bug]: problem with upscaling images

Prerequisites

Describe the problem

Problem with upscaling images on Colab.

Full console log output

Image upscaled.
Traceback (most recent call last):
  File "/content/Fooocus/modules/async_worker.py", line 892, in worker
    handler(task)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Fooocus/modules/async_worker.py", line 555, in handler
    uov_input_image_path = log(uov_input_image, d, output_format=output_format)
  File "/content/Fooocus/modules/private_logger.py", line 110, in log
    for label, key, value in metadata:
ValueError: not enough values to unpack (expected 3, got 2)
Total time: 15.53 seconds

Version

latest

Where are you running Fooocus?

Locally

Operating System

No response

What browsers are you seeing the problem on?

Microsoft Edge

[Bug]: There is no lcm realtime canvas painting tab

Prerequisites

Describe the problem

Hey! Thank you for your work and this fork!
The only question is that I can't find the tab for LCM realtime canvas painting.
You list this feature among the enhancements/fixed bugs, but there is no example with screenshots (similar to what you have for the metadata feature or for generating a mask for inpainting).

So maybe I need to do some extra step after installing your fork to enable this feature?
Thanks in advance

Full console log output

D:\Fooocus_mashb1t_win64_2-1-864>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --listen --always-normal-vram --theme dark --preview-option fast
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--listen', '--always-normal-vram', '--theme', 'dark', '--preview-option', 'fast']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Total VRAM 8192 MB, total RAM 14188 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://0.0.0.0:7865
model_type EPS
UNet ADM Dimension 2816

To create a public link, set `share=True` in `launch()`.
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.85 seconds
Started worker with PID 38792
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865

Version

Fooocus 2.1.864

Where are you running Fooocus?

Locally

Operating System

Windows 11

What browsers are you seeing the problem on?

Chrome

[Feature]: Number of steps in log

Prerequisites

  • I have checked that this is not a duplicate of an already existing feature request

Is your feature request related to a problem? Please describe.

Sometimes I do tests with specific steps by adjusting the debug value "Forced Overwrite of Sampling Step".

Describe the idea you'd like

The number of steps should show up in the History Log, or, more specifically, the value of the debug setting "Forced Overwrite of Sampling Step".

[Bug]: No module name 'diffusers'

Prerequisites

Describe the problem

  1. I did: git clone https://github.com/mashb1t/Fooocus.git
  2. I copied the python_embeded directory and run.bat from the current version of Fooocus
  3. I copied my config.txt from Fooocus to this Fooocus directory
  4. When I run the run.bat file, I see the error noted below

Full console log output

D:\AI-Art\Fooocus-mashb1t>run.bat

D:\AI-Art\Fooocus-mashb1t>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Error checking version for diffusers: No package metadata was found for diffusers
Installing requirements
Couldn't install requirements.
Command: "D:\AI-Art\Fooocus-mashb1t\python_embeded\python.exe" -m pip install -r "requirements_versions.txt" --prefer-binary
Error code: 1
stdout: Requirement already satisfied: torchsde==0.2.5 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 1)) (0.2.5)
Requirement already satisfied: einops==0.4.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 2)) (0.4.1)
Requirement already satisfied: transformers==4.30.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 3)) (4.30.2)
Requirement already satisfied: safetensors==0.3.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 4)) (0.3.1)
Requirement already satisfied: accelerate==0.21.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 5)) (0.21.0)
Requirement already satisfied: pyyaml==6.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 6)) (6.0)
Requirement already satisfied: Pillow==9.2.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 7)) (9.2.0)
Requirement already satisfied: scipy==1.9.3 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 8)) (1.9.3)
Requirement already satisfied: tqdm==4.65.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 9)) (4.65.0)
Requirement already satisfied: psutil==5.9.5 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 10)) (5.9.5)
Requirement already satisfied: pytorch_lightning==1.9.4 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 11)) (1.9.4)
Requirement already satisfied: omegaconf==2.2.3 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 12)) (2.2.3)
Requirement already satisfied: gradio==3.41.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 13)) (3.41.2)
Requirement already satisfied: pygit2==1.12.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 14)) (1.12.2)
Requirement already satisfied: opencv-contrib-python==4.8.0.74 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 15)) (4.8.0.74)
Collecting diffusers==0.25.1 (from -r requirements_versions.txt (line 16))
  Obtaining dependency information for diffusers==0.25.1 from https://files.pythonhosted.org/packages/e4/c6/1f9768606c937e71c4d391307f395942c42d5567f538712dbf37b0cc0917/diffusers-0.25.1-py3-none-any.whl.metadata
  Using cached diffusers-0.25.1-py3-none-any.whl.metadata (19 kB)
Requirement already satisfied: httpx==0.24.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 17)) (0.24.1)
Requirement already satisfied: onnxruntime==1.16.3 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 18)) (1.16.3)
Requirement already satisfied: timm==0.9.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from -r requirements_versions.txt (line 19)) (0.9.2)
Collecting translators==5.8.9 (from -r requirements_versions.txt (line 20))
  Obtaining dependency information for translators==5.8.9 from https://files.pythonhosted.org/packages/25/68/9334a80ec0f54b294f1f4dee50494e6a21c5badc1a8e11270037bc177d88/translators-5.8.9-py3-none-any.whl.metadata
  Using cached translators-5.8.9-py3-none-any.whl.metadata (68 kB)
Collecting rembg==2.0.53 (from -r requirements_versions.txt (line 21))
  Obtaining dependency information for rembg==2.0.53 from https://files.pythonhosted.org/packages/55/6e/5a336d1308105fbe2a9738e7b99e79549628e80595d62142c4334e319b67/rembg-2.0.53-py3-none-any.whl.metadata
  Using cached rembg-2.0.53-py3-none-any.whl.metadata (14 kB)
Collecting groundingdino-py==0.4.0 (from -r requirements_versions.txt (line 22))
  Using cached groundingdino_py-0.4.0-py2.py3-none-any.whl
Requirement already satisfied: boltons>=20.2.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (23.0.0)
Requirement already satisfied: torch>=1.6.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (2.1.0+cu121)
Requirement already satisfied: trampoline>=0.1.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (0.1.2)
Requirement already satisfied: numpy>=1.19.* in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from torchsde==0.2.5->-r requirements_versions.txt (line 1)) (1.23.5)
Requirement already satisfied: filelock in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (3.12.2)
Requirement already satisfied: huggingface-hub<1.0,>=0.14.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (0.15.1)
Requirement already satisfied: packaging>=20.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (23.1)
Requirement already satisfied: regex!=2019.12.17 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (2023.6.3)
Requirement already satisfied: requests in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (2.31.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from transformers==4.30.2->-r requirements_versions.txt (line 3)) (0.13.3)
Requirement already satisfied: colorama in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from tqdm==4.65.0->-r requirements_versions.txt (line 9)) (0.4.6)
Requirement already satisfied: fsspec[http]>2021.06.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (2023.6.0)
Requirement already satisfied: torchmetrics>=0.7.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (1.0.3)
Requirement already satisfied: typing-extensions>=4.0.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (4.7.1)
Requirement already satisfied: lightning-utilities>=0.6.0.post0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (0.9.0)
Requirement already satisfied: antlr4-python3-runtime==4.9.* in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from omegaconf==2.2.3->-r requirements_versions.txt (line 12)) (4.9.3)
Requirement already satisfied: aiofiles<24.0,>=22.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (23.2.1)
Requirement already satisfied: altair<6.0,>=4.2.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (5.0.1)
Requirement already satisfied: fastapi in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.101.0)
Requirement already satisfied: ffmpy in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.3.1)
Requirement already satisfied: gradio-client==0.5.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.5.0)
Collecting importlib-resources<7.0,>=1.3 (from gradio==3.41.2->-r requirements_versions.txt (line 13))
  Obtaining dependency information for importlib-resources<7.0,>=1.3 from https://files.pythonhosted.org/packages/93/e8/facde510585869b5ec694e8e0363ffe4eba067cb357a8398a55f6a1f8023/importlib_resources-6.1.1-py3-none-any.whl.metadata
  Using cached importlib_resources-6.1.1-py3-none-any.whl.metadata (4.1 kB)
Requirement already satisfied: jinja2<4.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (3.1.2)
Requirement already satisfied: markupsafe~=2.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.1.3)
Requirement already satisfied: matplotlib~=3.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (3.7.2)
Requirement already satisfied: orjson~=3.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (3.9.4)
Requirement already satisfied: pandas<3.0,>=1.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.0.3)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,<3.0.0,>=1.7.4 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.1.1)
Requirement already satisfied: pydub in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.25.1)
Requirement already satisfied: python-multipart in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.0.6)
Requirement already satisfied: semantic-version~=2.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.10.0)
Requirement already satisfied: uvicorn>=0.14.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.23.2)
Requirement already satisfied: websockets<12.0,>=10.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from gradio==3.41.2->-r requirements_versions.txt (line 13)) (11.0.3)
Requirement already satisfied: cffi>=1.9.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pygit2==1.12.2->-r requirements_versions.txt (line 14)) (1.15.1)
Collecting importlib-metadata (from diffusers==0.25.1->-r requirements_versions.txt (line 16))
  Obtaining dependency information for importlib-metadata from https://files.pythonhosted.org/packages/c0/8b/d8427f023c081a8303e6ac7209c16e6878f2765d5b59667f3903fbcfd365/importlib_metadata-7.0.1-py3-none-any.whl.metadata
  Using cached importlib_metadata-7.0.1-py3-none-any.whl.metadata (4.9 kB)
Collecting huggingface-hub<1.0,>=0.14.1 (from transformers==4.30.2->-r requirements_versions.txt (line 3))
  Obtaining dependency information for huggingface-hub<1.0,>=0.14.1 from https://files.pythonhosted.org/packages/28/03/7d3c7153113ec59cfb31e3b8ee773f5f420a0dd7d26d40442542b96675c3/huggingface_hub-0.20.3-py3-none-any.whl.metadata
  Using cached huggingface_hub-0.20.3-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: certifi in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpx==0.24.1->-r requirements_versions.txt (line 17)) (2023.5.7)
Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpx==0.24.1->-r requirements_versions.txt (line 17)) (0.17.3)
Requirement already satisfied: idna in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpx==0.24.1->-r requirements_versions.txt (line 17)) (3.4)
Requirement already satisfied: sniffio in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpx==0.24.1->-r requirements_versions.txt (line 17)) (1.3.0)
Requirement already satisfied: coloredlogs in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (15.0.1)
Requirement already satisfied: flatbuffers in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (23.5.26)
Requirement already satisfied: protobuf in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (4.25.2)
Requirement already satisfied: sympy in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (1.12)
Requirement already satisfied: torchvision in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from timm==0.9.2->-r requirements_versions.txt (line 19)) (0.16.0+cu121)
Requirement already satisfied: PyExecJS>=1.5.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from translators==5.8.9->-r requirements_versions.txt (line 20)) (1.5.1)
Collecting lxml>=4.9.1 (from translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for lxml>=4.9.1 from https://files.pythonhosted.org/packages/dc/3d/53c664318c9ab5bfbe9b9b3b6be0d04a2fa161f4cd35d731d27a0f754253/lxml-5.1.0-cp310-cp310-win_amd64.whl.metadata
  Using cached lxml-5.1.0-cp310-cp310-win_amd64.whl.metadata (3.6 kB)
Collecting pathos>=0.2.9 (from translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for pathos>=0.2.9 from https://files.pythonhosted.org/packages/f4/7f/cea34872c000d17972dad998575d14656d7c6bcf1a08a8d66d73c1ef2cca/pathos-0.3.2-py3-none-any.whl.metadata
  Using cached pathos-0.3.2-py3-none-any.whl.metadata (11 kB)
Collecting cryptography>=38.0.1 (from translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for cryptography>=38.0.1 from https://files.pythonhosted.org/packages/6d/3d/9ede0d37439a16263070739fbe2df7b44017696a685512fb1d379069ab6c/cryptography-42.0.2-cp39-abi3-win_amd64.whl.metadata
  Using cached cryptography-42.0.2-cp39-abi3-win_amd64.whl.metadata (5.4 kB)
Requirement already satisfied: jsonschema in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from rembg==2.0.53->-r requirements_versions.txt (line 21)) (4.19.0)
Collecting opencv-python-headless (from rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for opencv-python-headless from https://files.pythonhosted.org/packages/20/44/458a0a135866f5e08266566b32ad9a182a7a059a894effe6c41a9c841ff1/opencv_python_headless-4.9.0.80-cp37-abi3-win_amd64.whl.metadata
  Using cached opencv_python_headless-4.9.0.80-cp37-abi3-win_amd64.whl.metadata (20 kB)
Collecting pooch (from rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for pooch from https://files.pythonhosted.org/packages/1a/a5/5174dac3957ac412e80a00f30b6507031fcab7000afc9ea0ac413bddcff2/pooch-1.8.0-py3-none-any.whl.metadata
  Using cached pooch-1.8.0-py3-none-any.whl.metadata (9.9 kB)
Collecting pymatting (from rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for pymatting from https://files.pythonhosted.org/packages/46/aa/d7ff530c33c654263d8775ceb50a73f636fc65edbdf09e102c47a3be391b/PyMatting-1.1.12-py3-none-any.whl.metadata
  Using cached PyMatting-1.1.12-py3-none-any.whl.metadata (7.4 kB)
Collecting scikit-image (from rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for scikit-image from https://files.pythonhosted.org/packages/86/f0/18895318109f9b508f2310f136922e455a453550826a8240b412063c2528/scikit_image-0.22.0-cp310-cp310-win_amd64.whl.metadata
  Using cached scikit_image-0.22.0-cp310-cp310-win_amd64.whl.metadata (13 kB)
Requirement already satisfied: addict in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from groundingdino-py==0.4.0->-r requirements_versions.txt (line 22)) (2.4.0)
Collecting yapf (from groundingdino-py==0.4.0->-r requirements_versions.txt (line 22))
  Obtaining dependency information for yapf from https://files.pythonhosted.org/packages/66/c9/d4b03b2490107f13ebd68fe9496d41ae41a7de6275ead56d0d4621b11ffd/yapf-0.40.2-py3-none-any.whl.metadata
  Using cached yapf-0.40.2-py3-none-any.whl.metadata (45 kB)
Requirement already satisfied: opencv-python in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from groundingdino-py==0.4.0->-r requirements_versions.txt (line 22)) (4.8.1.78)
Collecting supervision==0.6.0 (from groundingdino-py==0.4.0->-r requirements_versions.txt (line 22))
  Using cached supervision-0.6.0-py3-none-any.whl (31 kB)
Collecting pycocotools (from groundingdino-py==0.4.0->-r requirements_versions.txt (line 22))
  Obtaining dependency information for pycocotools from https://files.pythonhosted.org/packages/6e/03/66168e1940ad0ea745cc6489fd1bd9b8c296b20a7c0102bd329382880659/pycocotools-2.0.7-cp310-cp310-win_amd64.whl.metadata
  Using cached pycocotools-2.0.7-cp310-cp310-win_amd64.whl.metadata (1.1 kB)
Requirement already satisfied: toolz in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from altair<6.0,>=4.2.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.12.0)
Requirement already satisfied: pycparser in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from cffi>=1.9.1->pygit2==1.12.2->-r requirements_versions.txt (line 14)) (2.21)
Requirement already satisfied: aiohttp!=4.0.0a0,!=4.0.0a1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (3.8.4)
Requirement already satisfied: h11<0.15,>=0.13 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpcore<0.18.0,>=0.15.0->httpx==0.24.1->-r requirements_versions.txt (line 17)) (0.14.0)
Requirement already satisfied: anyio<5.0,>=3.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from httpcore<0.18.0,>=0.15.0->httpx==0.24.1->-r requirements_versions.txt (line 17)) (3.7.1)
Requirement already satisfied: attrs>=22.2.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from jsonschema->rembg==2.0.53->-r requirements_versions.txt (line 21)) (23.1.0)
Requirement already satisfied: jsonschema-specifications>=2023.03.6 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from jsonschema->rembg==2.0.53->-r requirements_versions.txt (line 21)) (2023.7.1)
Requirement already satisfied: referencing>=0.28.4 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from jsonschema->rembg==2.0.53->-r requirements_versions.txt (line 21)) (0.30.2)
Requirement already satisfied: rpds-py>=0.7.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from jsonschema->rembg==2.0.53->-r requirements_versions.txt (line 21)) (0.9.2)
Requirement already satisfied: contourpy>=1.0.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (1.1.0)
Requirement already satisfied: cycler>=0.10 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (4.42.0)
Requirement already satisfied: kiwisolver>=1.0.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (1.4.4)
Requirement already satisfied: pyparsing<3.1,>=2.3.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (3.0.9)
Requirement already satisfied: python-dateutil>=2.7 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from matplotlib~=3.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pandas<3.0,>=1.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (2023.3)
Requirement already satisfied: tzdata>=2022.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pandas<3.0,>=1.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (2023.3)
Requirement already satisfied: ppft>=1.7.6.8 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pathos>=0.2.9->translators==5.8.9->-r requirements_versions.txt (line 20)) (1.7.6.8)
Collecting dill>=0.3.8 (from pathos>=0.2.9->translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for dill>=0.3.8 from https://files.pythonhosted.org/packages/c9/7a/cef76fd8438a42f96db64ddaa85280485a9c395e7df3db8158cfec1eee34/dill-0.3.8-py3-none-any.whl.metadata
  Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB)
Collecting pox>=0.3.4 (from pathos>=0.2.9->translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for pox>=0.3.4 from https://files.pythonhosted.org/packages/e1/d7/9e73c32f73da71e8224b4cb861b5db50ebdebcdff14d3e3fb47a63c578b2/pox-0.3.4-py3-none-any.whl.metadata
  Using cached pox-0.3.4-py3-none-any.whl.metadata (8.0 kB)
Collecting multiprocess>=0.70.16 (from pathos>=0.2.9->translators==5.8.9->-r requirements_versions.txt (line 20))
  Obtaining dependency information for multiprocess>=0.70.16 from https://files.pythonhosted.org/packages/bc/f7/7ec7fddc92e50714ea3745631f79bd9c96424cb2702632521028e57d3a36/multiprocess-0.70.16-py310-none-any.whl.metadata
  Using cached multiprocess-0.70.16-py310-none-any.whl.metadata (7.2 kB)
Requirement already satisfied: annotated-types>=0.4.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,<3.0.0,>=1.7.4->gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.5.0)
Requirement already satisfied: pydantic-core==2.4.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,<3.0.0,>=1.7.4->gradio==3.41.2->-r requirements_versions.txt (line 13)) (2.4.0)
Requirement already satisfied: six>=1.10.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from PyExecJS>=1.5.1->translators==5.8.9->-r requirements_versions.txt (line 20)) (1.16.0)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from requests->transformers==4.30.2->-r requirements_versions.txt (line 3)) (3.1.0)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from requests->transformers==4.30.2->-r requirements_versions.txt (line 3)) (2.0.3)
Requirement already satisfied: networkx in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from torch>=1.6.0->torchsde==0.2.5->-r requirements_versions.txt (line 1)) (3.1)
Requirement already satisfied: click>=7.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from uvicorn>=0.14.0->gradio==3.41.2->-r requirements_versions.txt (line 13)) (8.1.6)
Requirement already satisfied: humanfriendly>=9.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from coloredlogs->onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (10.0)
Requirement already satisfied: starlette<0.28.0,>=0.27.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from fastapi->gradio==3.41.2->-r requirements_versions.txt (line 13)) (0.27.0)
Requirement already satisfied: zipp>=0.5 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from importlib-metadata->diffusers==0.25.1->-r requirements_versions.txt (line 16)) (3.17.0)
Collecting platformdirs>=2.5.0 (from pooch->rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for platformdirs>=2.5.0 from https://files.pythonhosted.org/packages/55/72/4898c44ee9ea6f43396fbc23d9bfaf3d06e01b83698bdf2e4c919deceb7c/platformdirs-4.2.0-py3-none-any.whl.metadata
  Using cached platformdirs-4.2.0-py3-none-any.whl.metadata (11 kB)
Requirement already satisfied: numba!=0.49.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from pymatting->rembg==2.0.53->-r requirements_versions.txt (line 21)) (0.58.1)
Collecting imageio>=2.27 (from scikit-image->rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for imageio>=2.27 from https://files.pythonhosted.org/packages/c0/69/3aaa69cb0748e33e644fda114c9abd3186ce369edd4fca11107e9f39c6a7/imageio-2.33.1-py3-none-any.whl.metadata
  Using cached imageio-2.33.1-py3-none-any.whl.metadata (4.9 kB)
Requirement already satisfied: tifffile>=2022.8.12 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from scikit-image->rembg==2.0.53->-r requirements_versions.txt (line 21)) (2024.1.30)
Collecting lazy_loader>=0.3 (from scikit-image->rembg==2.0.53->-r requirements_versions.txt (line 21))
  Obtaining dependency information for lazy_loader>=0.3 from https://files.pythonhosted.org/packages/a1/c3/65b3814e155836acacf720e5be3b5757130346670ac454fee29d3eda1381/lazy_loader-0.3-py3-none-any.whl.metadata
  Using cached lazy_loader-0.3-py3-none-any.whl.metadata (4.3 kB)
Requirement already satisfied: mpmath>=0.19 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from sympy->onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (1.3.0)
Requirement already satisfied: tomli>=2.0.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from yapf->groundingdino-py==0.4.0->-r requirements_versions.txt (line 22)) (2.0.1)
Requirement already satisfied: multidict<7.0,>=4.5 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch_lightning==1.9.4->-r requirements_versions.txt (line 11)) (1.3.1)
Requirement already satisfied: exceptiongroup in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from anyio<5.0,>=3.0->httpcore<0.18.0,>=0.15.0->httpx==0.24.1->-r requirements_versions.txt (line 17)) (1.1.2)
Requirement already satisfied: pyreadline3 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime==1.16.3->-r requirements_versions.txt (line 18)) (3.4.1)
Requirement already satisfied: llvmlite<0.42,>=0.41.0dev0 in d:\ai-art\fooocus-mashb1t\python_embeded\lib\site-packages (from numba!=0.49.0->pymatting->rembg==2.0.53->-r requirements_versions.txt (line 21)) (0.41.1)
Using cached diffusers-0.25.1-py3-none-any.whl (1.8 MB)
Using cached translators-5.8.9-py3-none-any.whl (54 kB)
Using cached rembg-2.0.53-py3-none-any.whl (32 kB)
Using cached cryptography-42.0.2-cp39-abi3-win_amd64.whl (2.9 MB)
Using cached huggingface_hub-0.20.3-py3-none-any.whl (330 kB)
Using cached importlib_resources-6.1.1-py3-none-any.whl (33 kB)
Using cached lxml-5.1.0-cp310-cp310-win_amd64.whl (3.9 MB)
Using cached pathos-0.3.2-py3-none-any.whl (82 kB)
Using cached importlib_metadata-7.0.1-py3-none-any.whl (23 kB)
Using cached opencv_python_headless-4.9.0.80-cp37-abi3-win_amd64.whl (38.5 MB)
Using cached pooch-1.8.0-py3-none-any.whl (62 kB)
Using cached pycocotools-2.0.7-cp310-cp310-win_amd64.whl (84 kB)
Using cached PyMatting-1.1.12-py3-none-any.whl (52 kB)
Using cached scikit_image-0.22.0-cp310-cp310-win_amd64.whl (24.5 MB)
Using cached yapf-0.40.2-py3-none-any.whl (254 kB)
Using cached dill-0.3.8-py3-none-any.whl (116 kB)
Using cached imageio-2.33.1-py3-none-any.whl (313 kB)
Using cached lazy_loader-0.3-py3-none-any.whl (9.1 kB)
Using cached multiprocess-0.70.16-py310-none-any.whl (134 kB)
Using cached platformdirs-4.2.0-py3-none-any.whl (17 kB)
Using cached pox-0.3.4-py3-none-any.whl (29 kB)
Installing collected packages: pox, platformdirs, opencv-python-headless, lxml, lazy_loader, importlib-resources, importlib-metadata, imageio, dill, yapf, scikit-image, pymatting, pooch, multiprocess, huggingface-hub, cryptography, supervision, pycocotools, pathos, diffusers, translators, rembg, groundingdino-py

stderr: DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'D:\\AI-Art\\Fooocus-mashb1t\\python_embeded\\Lib\\site-packages\\cv2\\cv2.pyd'
Consider using the `--user` option or check the permissions.


[notice] A new release of pip is available: 23.2.1 -> 23.3.2
[notice] To update, run: D:\AI-Art\Fooocus-mashb1t\python_embeded\python.exe -m pip install --upgrade pip

CMD Failed requirements: install -r "requirements_versions.txt"
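The `WinError 5` above usually means `cv2.pyd` is either locked by a still-running Fooocus process or genuinely write-protected. A minimal probe like the following can tell the two apart before retrying the install (the function name `is_replaceable` is ours, not part of Fooocus or pip):

```python
# Minimal sketch: check whether pip could replace a file, to distinguish
# "file locked by a running process" from a plain permissions problem.
import os

def is_replaceable(path: str) -> bool:
    """Return True if the file is absent or can be opened for writing."""
    if not os.path.exists(path):
        return True  # nothing to replace
    try:
        with open(path, "r+b"):
            return True
    except (PermissionError, OSError):
        return False
```

If this returns `False` for the `cv2.pyd` path from the error, closing all Fooocus/Python processes (or an elevated shell) and re-running the requirements install is the usual remedy.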
Total VRAM 6144 MB, total RAM 16326 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 SUPER : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: D:\AI-Art\Fooocus-Common\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\AI-Art\Fooocus-Common\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\AI-Art\Fooocus-Common\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\AI-Art\Fooocus-Common\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Exception in thread Thread-4 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\AI-Art\Fooocus-mashb1t\Fooocus\modules\async_worker.py", line 47, in worker
    from modules.censor import censor_batch
  File "D:\AI-Art\Fooocus-mashb1t\Fooocus\modules\censor.py", line 4, in <module>
    from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
ModuleNotFoundError: No module named 'diffusers'
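The crash in the worker thread follows directly from the aborted install: `diffusers` never got copied into `site-packages`. A small pre-flight check like this (a sketch; `missing_modules` is a hypothetical helper, not existing Fooocus code) would surface the problem with a clear message instead of a mid-run traceback:

```python
# Minimal sketch: detect dependencies left missing by a failed
# "pip install -r requirements_versions.txt" before starting the worker.
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]
```

For example, `missing_modules(["diffusers", "rembg"])` run at startup would report exactly the packages whose installation was rolled back by the earlier `OSError`.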

Version

Fooocus 2.1.864

Where are you running Fooocus?

Locally

Operating System

Windows 10

What browsers are you seeing the problem on?

Chrome

[Bug]: History Log link gives error when Output folder has been changed

Prerequisites

Describe the problem

Hi

Thanks for the excellent UI.
Minor irritation:

History Log link gives error:
{
"detail": "File not allowed: C:/AI/Outputs/Fooocus-mash/2024-02-25/log.html."
}
when Output folder has been changed.

The file quoted in the error opens fine when its address is copied and pasted into the browser.

The log file opens normally when the output folder has not been moved.

Browser is Edge on Windows 11

The behaviour is the same on both your fork (mashb1t) and lllyasviel's.

It is not a big problem, probably some permissions issue; I just wanted you to be aware of it.

Thanks
Mike
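The "File not allowed" response is most likely Gradio's file-serving restriction rather than a filesystem permissions issue: Gradio only serves files under its working directory unless extra roots are whitelisted (its `launch()` accepts an `allowed_paths` parameter for this). The check behind the error is, in effect, something like this sketch (`is_allowed` is our illustrative name, not Gradio's internal function):

```python
# Minimal sketch of a path-whitelist check like the one behind
# Gradio's "File not allowed" error: a file is only served if it
# resolves to a location under one of the allowed root directories.
from pathlib import Path

def is_allowed(file_path: str, allowed_roots: list) -> bool:
    resolved = Path(file_path).resolve()
    return any(
        resolved.is_relative_to(Path(root).resolve())
        for root in allowed_roots
    )
```

With a custom `path_outputs` such as `C:/AI/Outputs/Fooocus-mash`, the log file falls outside every allowed root, so the link fails even though the browser can open the same path directly.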

Full console log output

C:\AI\Fooocus-Mash>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Fast-forward merge
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865 (mashb1t)
Total VRAM 6144 MB, total RAM 65461 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 2060 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: C:\AI\Models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\AI\Models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\AI\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\AI\Models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.49 seconds
Started worker with PID 19428
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 8759536093732494450
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] steelorchid, intricate, elegant, highly detailed, wonderful colors, sweet, sharp focus, professional composition, cute, magical ambient, dynamic background, cool light, modern, advanced, thought, full color, beautiful, creative, positive, perfect, pure, attractive, artistic, loving, delicate, pretty, friendly, best, successful, romantic, peaceful, unique, vibrant
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.21 seconds
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 1024)
Preparation time: 2.14 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 3113.787935256958
[Fooocus Model Management] Moving model(s) has taken 5.45 seconds
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 30/30 [00:28<00:00,  1.06it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.40 seconds
Image generated with private log at: C:\AI\Outputs\Fooocus-mash\2024-02-25\log.html
Generating and saving time: 41.37 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Total time: 43.55 seconds
[Fooocus Model Management] Moving model(s) has taken 0.67 seconds

Version

Fooocus version: 2.1.865 (mashb1t)

Where are you running Fooocus?

Locally

Operating System

Windows 11 build 26058

What browsers are you seeing the problem on?

Chrome, Microsoft Edge
