Comments (20)
It's working in my case.
I have 16GB RAM and an Nvidia GTX 1650 with 4GB VRAM.
Yeah, that's true, but it makes my system unusable!
from fooocus.
Both the GTX 1650 and the 850 have 4GB of VRAM, but it seems the 1650 works and is "smarter" about using its VRAM more efficiently.
The 850 is probably very difficult to support. We will likely add some notes to the Readme to make this clearer.
from fooocus.
I have a 3080 FE with 10GB of VRAM. I think we need a medvram setting, because it takes a very long time to render some things. But if I keep it simple (low poly, "whale"), it renders about as fast as I'd expect.
from fooocus.
It eats up almost all of my RAM too, making my computer unusable. Sadly, for now, buying a new card is still the proper way to use SDXL.
from fooocus.
I just found a temporary fix for the VRAM issue on my desktop: disable the video card, then re-enable it. That freed up almost half my VRAM. In PowerShell (run as Administrator, and adjust the FriendlyName to match your GPU):
Get-PnpDevice -FriendlyName "NVIDIA GeForce RTX 3080" | Disable-PnpDevice -Confirm:$false
Start-Sleep -Seconds 5
Get-PnpDevice -FriendlyName "NVIDIA GeForce RTX 3080" | Enable-PnpDevice -Confirm:$false
from fooocus.
Fails for me the same way as for the OP.
I have 16GB RAM and a GTX 1050 Ti with 4GB VRAM.
I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.
from fooocus.
I have 32GB RAM and a 3050 Ti with 4GB VRAM. I wish it could be supported.
from fooocus.
OK, as suspected: I needed to free up enough disk space, and now it runs.
The error message was misleading, as it wasn't VRAM or RAM but disk space that was causing the failure.
Curiously enough, it seems to use only about 2GB of VRAM.
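If you suspect the same cause, a quick pre-flight check of free disk space is easy with Python's standard library. This is just a minimal sketch; the drive choice and the ~7GB threshold are assumptions based on what this thread reports, not anything Fooocus itself checks:

```python
import os
import shutil

def free_gb(path: str) -> float:
    """Return free space on the drive containing `path`, in GB."""
    return shutil.disk_usage(path).free / 1024**3

# Warn if the system drive has less than ~7GB free, roughly what
# Fooocus consumed in this thread before failing.
root = "C:\\" if os.name == "nt" else "/"
if free_gb(root) < 7:
    print("Low disk space: Fooocus may fail while offloading models.")
```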
from fooocus.
16GB of RAM and 4GB of VRAM (GTX 950M). I get no errors, but it just gets stuck when I hit Generate.
from fooocus.
I have an RTX 3050 Ti, and it's just stuck in this state, not progressing.
from fooocus.
Also fails on mine: GTX 960M, 4GB VRAM, 16GB RAM:
D:\JJ\python\Fooocus_win64_2-1-791>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --preset realistic
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--preset', 'realistic']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.802
Running on local URL: http://127.0.0.1:7865
To create a public link, set `share=True` in `launch()`.
Total VRAM 4096 MB, total RAM 16250 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce GTX 960M : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25)] for model [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors].
Loaded LoRA [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\loras\SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for model [D:\JJ\python\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors] with 1052 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 10.48 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 7801272631270716136
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] cat, cinematic, complex, highly detailed, extremely, sharp focus, beautiful, stunning composition, symmetry, great colors, aesthetic, very inspirational, colorful, deep color, inspiring, original, full bright, lovely, cute, artistic, intricate, elegant, perfect light, fine detail, clear, ambient background, professional, creative, positive, amazing, pure, wonderful, unique
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] cat, very coherent, cute, cinematic, detailed, intricate, stunning, highly refined, epic composition, magical atmosphere, full color, elegant, luxury, amazing detail, professional, winning, thoughtful, calm, beautiful, unique, best, awesome, perfect, ambient light, shining, illuminated, translucent, fine, artistic, pure, positive, attractive, creative, vibrant
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
Preparation time: 41.61 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
D:\JJ\python\Fooocus_win64_2-1-791>pause
Press any key to continue . . .
from fooocus.
Yes, I should modify the Readme to say that 4GB of VRAM also requires a GPU architecture that supports float16.
But how can the GTX 960M and SDXL be put in the same sentence?
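For context, whether a card can run fp16 at all depends on its CUDA compute capability. A rough check might look like the sketch below; this is not Fooocus's actual logic, just a heuristic based on NVIDIA's published capability numbers (Maxwell mobile parts like the 960M are 5.0, native fp16 arithmetic appeared at 5.3, and Pascal's fp16 throughput on consumer cards is very low despite being supported):

```python
def supports_fast_fp16(major: int, minor: int) -> bool:
    """Rough heuristic: native fp16 arithmetic requires compute
    capability >= 5.3; Maxwell desktop/mobile parts (5.0/5.2) lack it."""
    return (major, minor) >= (5, 3)

# GTX 960M is Maxwell, compute capability 5.0 -> no native fp16.
# GTX 1050 Ti (Pascal, 6.1) and GTX 1650 (Turing, 7.5) do support it,
# though Pascal consumer cards execute fp16 at a much lower rate.
for name, cc in [("GTX 960M", (5, 0)), ("GTX 1050 Ti", (6, 1)), ("GTX 1650", (7, 5))]:
    print(name, supports_fast_fp16(*cc))
```

On real hardware you would get the capability tuple from `torch.cuda.get_device_capability()` rather than hard-coding it.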
from fooocus.
I have 8GB RAM and a 1050 Ti with 4GB VRAM. Can I use it?
from fooocus.
Yes, I should modify the Readme to say that 4GB of VRAM also requires a GPU architecture that supports float16.
But how can the GTX 960M and SDXL be put in the same sentence?
Because:
- SDXL and 4GB VRAM were in the same sentence (or at least in the same repo)
- The 960M worked fine with fp16 SD 1.5 models
Now it's entirely possible this is another issue, and I can accept that. But this repo seems to imply it will work with all 4GB Nvidia GPUs, and I've found a counterexample: my old laptop's.
from fooocus.
Fails for me the same way as for the OP. I have 16GB RAM and a GTX 1050 Ti with 4GB VRAM. I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.
Same issue. Is there any way I can prevent it from using space on the C: drive?
from fooocus.
You cannot: when VRAM is exhausted, RAM is used, and when RAM is (nearly) full, the swap file on disk is used.
Preventing offloading from the GPU is possible, but then you won't be able to run Fooocus at all.
See https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md
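The fallback chain described above can be sketched as a simple placement decision. This is purely illustrative; the function name and numbers are made up for the example and are not Fooocus internals:

```python
def pick_placement(model_mb: int, free_vram_mb: int, free_ram_mb: int) -> str:
    """Illustrative fallback: try VRAM first, then system RAM, then swap."""
    if model_mb <= free_vram_mb:
        return "vram"
    if model_mb <= free_ram_mb:
        return "ram"
    return "swap"  # this step is what eats the free space on C:

# A ~6.5GB SDXL checkpoint on a 4GB card with most of 16GB RAM free
# ends up offloaded to RAM rather than swap:
print(pick_placement(6500, 4096, 12000))  # -> ram
```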
from fooocus.
Fails for me the same way as for the OP. I have 16GB RAM and a GTX 1050 Ti with 4GB VRAM. I noticed that it gobbles up all the free space on C: (circa 7GB) before failing due to not enough memory.
Same issue. Is there any way I can prevent it from using space on the C: drive?
Same here too. Are there no clear instructions?
from fooocus.
@cp818 let's not revive a four-month-old thread; please open a discussion or a new issue and provide all the necessary information, such as terminal output and hardware specs.
from fooocus.