Comments (21)

lllyasviel commented on May 27, 2024

It's not working on 6GB GeForce RTX 3060

OK we know. Let us solve 6GB now.

from controlnet.

lllyasviel commented on May 27, 2024

I will try to add sliced attention this weekend (and perhaps copy some code from Automatic1111). Right now the model OOMs on 8GB GPUs.
See also CompVis/stable-diffusion#39
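The point of sliced attention is to compute the softmax one query chunk at a time, so the full (queries × keys) score matrix is never materialized at once; peak memory drops at the cost of some speed. A minimal NumPy sketch of the idea (the names `sliced_attention` and `slice_size` are illustrative, not ControlNet's actual API):

```python
import numpy as np

def sliced_attention(q, k, v, slice_size=2):
    # Process queries in chunks so only a (slice_size x Nk) score
    # matrix exists at any moment, instead of the full (Nq x Nk) one.
    out = np.empty_like(q)
    for start in range(0, q.shape[0], slice_size):
        end = start + slice_size
        scores = q[start:end] @ k.T / np.sqrt(q.shape[1])
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)
        out[start:end] = weights @ v
    return out
```

The result is identical to unsliced attention; only the memory/time trade-off changes with `slice_size`.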

lllyasviel commented on May 27, 2024

Low VRAM mode added.

https://github.com/lllyasviel/ControlNet/blob/main/docs/low_vram.md

Tested on several 8GB cards. Let's see if it works on 6GB.

scarbain commented on May 27, 2024

Great, thanks! I have a 12GB VRAM GPU; does it also work for training? Currently I can't train; even with a batch size of 1 I get OOM errors.

sovit-123 commented on May 27, 2024

@My12123
Maybe you can set save_memory = True in config.py if you have not already. That will make at least the Canny model run on an 8 GB GPU; I tried it with an RTX 3070.
Also, you can install xformers with pip install xformers. It seems to work pretty well, saving a gigabyte or so of memory, and inference is twice as fast. If you install xformers, just make sure you have PyTorch 1.13.1 (the latest as of now).
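For reference, the flag mentioned above lives at the top level of config.py in the ControlNet checkout; a sketch of the change (verify against your own copy of the file, as the comment wording here is mine):

```python
# config.py (ControlNet repository root)
# When True, the inference scripts offload idle submodules to CPU,
# trading some speed for lower peak VRAM.
save_memory = True
```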

yamada321 commented on May 27, 2024

I also tried xformers, but I keep getting No operator found for memory_efficient_attention_backward.

I've seen this too, and I suspect it was a faulty installation in my case. I had PyTorch 1.13.1 from conda in a conda env, installed xformers via pip, and memory_efficient_attention_backward was the error. After replacing xformers with its conda release, the error was gone and I was back to OOM at the start of training (also on 12GB VRAM). So my conda xformers install had issues.

sovit-123 commented on May 27, 2024

Yes. But it also works if we use the latest version of PyTorch along with the latest version of PyTorch Lightning.

rocryptogroup commented on May 27, 2024

same here

AmusedDiffuser commented on May 27, 2024

Amazing work, very inspiring, and I can't wait to try this. For now, I just spent all day downloading and troubleshooting on my slow internet, only to hit this OOM. I'm on a 6GB 1660 Ti, so I have to run 1111 with --medvram --precision full --no-half. Any hope at all of this ever working for me, or should I write this one off? Any chance of a Colab notebook for the rest of us?

lllyasviel commented on May 27, 2024

Great, thanks! I have a 12GB VRAM GPU; does it also work for training? Currently I can't train; even with a batch size of 1 I get OOM errors.

Will try, but it should work, since the attention layer is already sliced.

scarbain commented on May 27, 2024

I also tried using xformers, but I kept having incompatibility issues.

nichovski commented on May 27, 2024

It's not working on 6GB GeForce RTX 3060

ro99 commented on May 27, 2024

Great, thanks! I have a 12GB VRAM GPU; does it also work for training? Currently I can't train; even with a batch size of 1 I get OOM errors.

Same. Besides batch size, I also tried accumulate_grad_batches and save_memory, but still no luck training with 12GB VRAM. I also tried xformers, but I keep getting No operator found for memory_efficient_attention_backward.
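For context, accumulate_grad_batches in PyTorch Lightning spreads one optimizer step over several micro-batches: per-step activation memory shrinks, but model and optimizer state do not, which is likely why it isn't enough here on its own. A framework-free sketch of the mechanism, using a toy 1-D least-squares gradient (all names here are illustrative):

```python
# Gradient of the loss 0.5 * (w*x - y)**2 with respect to w.
def grad(w, x, y):
    return (w * x - y) * x

w = 0.0
lr = 0.1
accumulate = 4  # micro-batches per optimizer step

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
g_sum = 0.0
for i, (x, y) in enumerate(data, 1):
    g_sum += grad(w, x, y)       # only the running gradient is kept
    if i % accumulate == 0:      # one update per `accumulate` micro-batches
        w -= lr * g_sum / accumulate
        g_sum = 0.0
```

The update is mathematically the same as one step on the full batch of 4, but each micro-batch's activations can be freed before the next one is processed.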

sovit-123 commented on May 27, 2024

Hello.
Great work with ControlNet. Any updates on 6GB VRAM solutions?

My12123 commented on May 27, 2024

I still get this error; save_memory = True didn't help much:
RuntimeError: CUDA out of memory. Tried to allocate 290.00 MiB (GPU 0; 8.00 GiB total capacity; 6.29 GiB already allocated; 0 bytes free; 7.08 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
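The hint at the end of that traceback refers to PyTorch's caching-allocator setting, which can reduce fragmentation when reserved memory far exceeds allocated memory. It must be in the environment before the first CUDA allocation; 128 below is just an example value, not a recommendation:

```python
import os

# Set before importing torch (or export it in the shell before launching),
# so the CUDA caching allocator picks it up on its first allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, launch with `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python your_script.py`.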

twotwoiscute commented on May 27, 2024

The version mentioned in environment.yaml is 1.12.1, though.

- pytorch=1.12.1

engrmusawarali commented on May 27, 2024

I am not able to train the network, even though I am using the save-memory option. @sovit-123, can you help me?

sovit-123 commented on May 27, 2024

@engrmusawarali
Hi, even with the save memory option you need at least 18 GB of VRAM to train the model. That's what I found out when it was initially released. Not sure if that requirement has changed since then.

engrmusawarali commented on May 27, 2024

Have you tried small-scale training? If yes, how do I proceed with it? Can you guide me, @sovit-123?

sovit-123 commented on May 27, 2024

@engrmusawarali
I have not tried it yet. But planning to do it soon.

Hbhatt-merexgenAI commented on May 27, 2024

The training doesn't work on a 12GB VRAM RTX 3060.
