victarry / stable-dreambooth
Dreambooth implementation based on Stable Diffusion with minimal code.
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 47.46 GiB total capacity; 44.29 GiB already allocated; 862.56 MiB free; 45.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
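Two mitigations that often help with this OOM, sketched below. The allocator setting comes straight from the error message itself; the `--batch_size` flag is hypothetical, so use whatever knob your script actually exposes (or lower the batch size in the config).

```shell
# Reduce allocator fragmentation, as the error message suggests
# (assumption: a bash-like shell):
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# Then retrain with a smaller batch size (flag name is hypothetical):
python train.py --batch_size 1
```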
Textual inversion has trouble with that: it synthesizes mutations of the trained subjects unless you overfit, and then you can no longer edit the style or composition much.
Is it really like Dreambooth, and does it retain identity?
Error is:
Traceback (most recent call last):
File "train.py", line 206, in <module>
train_loop(config, model, noise_scheduler, optimizer, train_dataloader)
File "train.py", line 129, in train_loop
latents = model.vae.encode(imgs).mode() * 0.18215
AttributeError: 'AutoencoderKLOutput' object has no attribute 'mode'
It seems related to the diffusers library not running on the GPU? I am in an environment with an A6000, though.
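For what it's worth, this doesn't look GPU-related: newer diffusers versions wrap the VAE encoder output in an `AutoencoderKLOutput`, so the distribution has to be pulled out of `.latent_dist` first. A sketch of the fix for line 129 of `train.py` (assuming a recent diffusers release):

```python
# vae.encode() now returns an AutoencoderKLOutput; the Gaussian lives in
# .latent_dist, which still exposes .mode() and .sample():
latents = model.vae.encode(imgs).latent_dist.mode() * 0.18215
```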
Maybe you guys should cooperate?
It might be helpful to explain which images are needed to train a new model.
The partial example with some images in data/dogs/instance is more confusing than helpful. Would it be possible for you to include an example training dataset?
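Seconding this. For anyone else stuck, my understanding of the expected layout, based on the usual Dreambooth setup (only `data/dogs/instance` actually appears in the repo; the class directory name is an assumption):

```text
data/dogs/instance/  # a handful (3-5) of photos of the specific subject
data/dogs/class/     # prior-preservation images of the generic class (e.g. dogs)
```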
I have a similar question: AttributeError: 'StableDiffusionPipeline' object has no attribute 'parameters' (diffusers 0.15.0). Can you help me solve it?
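A `StableDiffusionPipeline` is not an `nn.Module`, so it has no `.parameters()` method; the parameters belong to its submodules, which are regular modules. One way to build the optimizer instead (a sketch; the `model` and `config` names are assumptions adapted to this repo's train.py):

```python
import torch

# Optimize only the UNet, which is what Dreambooth fine-tunes;
# pipeline-level .parameters() does not exist, but pipeline.unet,
# pipeline.vae, and pipeline.text_encoder are ordinary nn.Modules:
optimizer = torch.optim.AdamW(model.unet.parameters(), lr=config.lr)
```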
I was able to successfully train a model after changing the diffusers version and changing the batch size to 2, but when running inference on the output I only get reconstructions of the training images.
File "train.py", line 206, in
train_loop(config, model, noise_scheduler, optimizer, train_dataloader)
File "train.py", line 131, in train_loop
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps.cpu().numpy())
File "/HPS/EgofaceTrial/work/anaconda3/envs/stable-diffusion/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py", line 303, in add_noise
timesteps = timesteps.to(original_samples.device)
AttributeError: 'numpy.ndarray' object has no attribute 'to'
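Newer schedulers move the timesteps to the right device themselves (`timesteps.to(original_samples.device)`, as the traceback shows), so the `.cpu().numpy()` conversion on line 131 now breaks. Keeping the timesteps as a torch tensor should work; a sketch, not tested against every diffusers version:

```python
# add_noise() needs a torch.Tensor for timesteps, not a numpy array:
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
```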
How to get this working on an EC2 instance? any steps/tutorials?
Thank you.
for text in datasets:
    with torch.no_grad():
        images = model(text, height=512, width=512, num_inference_steps=50)["sample"]
The "sample" key raises a key error in my conda environment.
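This is most likely a diffusers version mismatch: in newer releases the pipeline output no longer carries a "sample" key. A sketch of the updated call, assuming a recent diffusers version where the output field is named `images`:

```python
# The pipeline now returns a StableDiffusionPipelineOutput whose
# .images field replaced the old "sample" dict key:
images = model(text, height=512, width=512, num_inference_steps=50).images
```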
Hi, similar to https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion, would you be interested in adding an example Colab notebook for training and inference using the diffusers pipeline?