Comments (19)
One thing I found to reduce memory: this code is based on Textual Inversion, and TI hard-codes gradient checkpointing off here (https://github.com/rinongal/textual_inversion/blob/main/ldm/modules/diffusionmodules/util.py#L112), because in TI the UNet is not optimized. Here, however, we do optimize the UNet, so we can turn the gradient checkpointing trick back on, as in the original SD repo (https://github.com/CompVis/stable-diffusion/blob/main/ldm/modules/diffusionmodules/util.py#L112). Gradient checkpointing defaults to True in the config (https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/blob/main/configs/stable-diffusion/v1-finetune_unfrozen.yaml#L47).
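The trick described above can be sketched with PyTorch's generic checkpoint utility (a minimal illustration only, not the repo's actual UNet code path, which toggles the same idea via its config flag):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Gradient checkpointing: activations inside the checkpointed block are
# NOT stored during the forward pass; they are recomputed during backward,
# trading extra compute for lower peak memory.
class Block(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, use_checkpoint: bool = True) -> torch.Tensor:
        if use_checkpoint and self.training:
            # use_reentrant=False is the recommended modern variant
            return checkpoint(self.net, x, use_reentrant=False)
        return self.net(x)

model = Block().train()
x = torch.randn(8, 64, requires_grad=True)
out = model(x)
out.sum().backward()  # gradients flow through the recomputed activations
```

With checkpointing off, every intermediate activation of `self.net` would be kept alive until backward; with it on, only the block's input is kept, which is why it matters so much when the UNet itself is being trained.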
from dreambooth-stable-diffusion.
It is possible at 256x256 resolution, and it works, but with lower quality.
Hi there,
I'm getting the same OOM error, running the code on an AWS p3.8xlarge instance (4x 16 GB GPUs). I guess I get the OOM because the model doesn't fit in one GPU?
Training isn't optimised for model parallelism. Multiple GPUs can only be used for data parallelism.
Is this correct?
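That matches how data parallelism behaves in general (a hedged sketch, not this repo's exact training loop): the full model is replicated on every GPU and only the batch is split, so a model that OOMs on one GPU still OOMs no matter how many GPUs you add.

```python
import torch
import torch.nn as nn

# Sketch: data parallelism replicates the *whole* model on each device
# and splits the batch across them -- it does not shard the model, so
# per-GPU memory for the model itself is unchanged.
model = nn.Linear(512, 512)
if torch.cuda.device_count() > 1:
    # each GPU gets a full copy of `model` plus a slice of the batch
    model = nn.DataParallel(model)

batch = torch.randn(16, 512)  # with N GPUs, each device sees 16 // N rows
out = model(batch)
```

Sharding the model across GPUs (model parallelism / FSDP-style approaches) would be a different mechanism and is not what this training script does out of the box.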
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 23.65 GiB total capacity; 22.01 GiB already allocated; 26.44 MiB free; 22.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What can I do?
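Regarding the `max_split_size_mb` hint in the traceback: it is set through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, and it only helps when reserved memory far exceeds allocated memory (fragmentation); it cannot create VRAM that isn't there. A minimal sketch:

```python
import os

# Must be set before CUDA is first initialized (ideally before importing
# torch at all). It caps the size of blocks the caching allocator will
# split, which fights fragmentation -- it does not add memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ...then import torch and launch training as usual.
```

Setting the same variable in the shell before launching the script has the same effect.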
Hello @liwei0826, did you manage to solve it?
Seems like fine-tuning requires far more memory than inference. My RTX 3090 with 24 GB VRAM is not enough for training either. Only an A100 with 40 GB or an A6000 with 48 GB VRAM can do the fine-tuning job so far.
So, can anyone clarify what the hardware requirements are? Can we add an "I'm using _____" note to the README so users have a realistic idea of what hardware to expect?
I was originally using an A6000 with 48 GB VRAM; now, after some optimization, I am sure it works on a V100 with 32 GB. I would like to see if it works on a 3090 with 24 GB. I think we are close to 24 GB, but not there yet.
Any way to get this running on a GTX 1080 with 8 GB of VRAM? How do I reduce the resolution? --W and --H are not working.
2080 Ti: 256x256 resolution
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 3.41 GiB already allocated; 9.44 MiB free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Hey! @AmitMY, how did you change the resolution? Did you just make your regularization images 256x256, or did you change some parameter?
In the config file, I changed every 512 to 256, hoping that would do the trick. Perhaps that was wrong.
Now on a V100, with 32GB, 512x512 trains just fine.
I tried that too, but nothing seemed to change.
Still running at 512x512, even though there is nothing with 512 left in the config. I've tried changing everything to 256 and using the --W and --H flags. Changing "image_size: 32" to 16 also does nothing. Someone correct me if I'm playing with the wrong settings.
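For reference, in LDM-style configs the training resolution normally comes from the data section, while the model's `image_size` is the *latent* resolution (1/8 of the pixel size for SD v1), so 256x256 pixels corresponds to a latent size of 32. A hedged sketch of the relevant fields (exact key names may differ slightly in this repo's yaml):

```yaml
model:
  params:
    image_size: 32        # latent resolution = pixel size / 8 (256 / 8 = 32)
data:
  params:
    train:
      params:
        size: 256         # pixel resolution the dataloader resizes images to
    reg:
      params:
        size: 256         # regularization set should match
```

The --W and --H flags belong to the sampling scripts, so changing them would not affect what resolution training runs at.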
Have you tried this version?
https://github.com/gammagec/Dreambooth-SD-optimized
I trained without any adjustments and got an OOM in the sixth round.
Changing ch: 128 (x4 channel multiplier = 512) to ch: 64 (4x64 = 256) does something, but I get a bunch of tensor size errors.
Will try that now!
@alleniver
VRAM usage only goes to about 5 GB, but I get this error:
Hey guys, didn't want to start a new thread since this one describes my issue.
I'm also running out of VRAM. I'm following the instructions as best I can, and they say I need 10 GB. I have a 12 GB 3080 Ti and am somehow still running out of memory. Is my GPU just not meant for this, or have I obviously done something wrong?
The Stable Diffusion weights page on Hugging Face proposes a solution for reducing GPU RAM, but I can't find the line of code to edit after doing a CTRL+F for "StableDiffusionPipeline" in every .py script I could find.
I am on the latest Dreambooth version, downloaded 3 days ago. Any ideas? I'm pretty stuck with my limited knowledge.
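For what it's worth, the Hugging Face tip refers to the `diffusers` library's `StableDiffusionPipeline`, which this CompVis-based repo never instantiates -- that is why the CTRL+F finds nothing to edit. If you were running inference through diffusers instead, the memory switches look roughly like this (a sketch assuming the `diffusers` package; it does not apply to training in this repo):

```python
def load_low_memory_pipeline(model_id: str = "CompVis/stable-diffusion-v1-4"):
    """Sketch: diffusers-side VRAM savings (requires the `diffusers` package).

    Not part of this repo -- the CompVis training code has no
    StableDiffusionPipeline, so there is no such line here to edit.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16  # half precision halves weight memory
    )
    pipe.enable_attention_slicing()  # compute attention in chunks -> lower peak VRAM
    return pipe
```

Those switches help inference only; the training OOMs discussed in this thread need the gradient-checkpointing and config changes mentioned above.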
+1 to @webeng's question. I am running it on a g5.12xlarge with 4 GPUs (96 GB total), and I am getting the
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 22.20 GiB total capacity; 3.25 GiB already allocated; 23.12 MiB free; 3.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
message. My guess is it's using just one GPU, as that is roughly 1/4 of the 96 GB.
It would be nice to get a confirmation on the above question to be sure.