Comments (8)
OK, I did some digging. The problem seems to be discussed here:
https://github.com/kohya-ss/sd-scripts/issues/601#issuecomment-1605241097
It seems these additional parameters need to be passed as a str rather than a list in the toml.
I am now passing these additional args via the CLI, since they are ignored in the toml.
@bmaltais is there any way to adjust this?
from kohya_ss.
Can you provide an example of that specific toml variable: what it is being set to now, and what it should be set to?
I should be able to transform it to the right format based on the example...
Going to close this issue:
- My goal is to pass
--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=250"
through the GUI, BUT it appears this can only be passed through the CLI because it originates in sd-scripts
- Hence, there is no point in passing it through the GUI Additional Parameters
Thanks @bmaltais
@bmaltais
Let me rephrase. I'd like to pass:
--lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=250"
in the GUI.
But at the moment, it only works in the CLI. If you place it in the GUI Additional Parameters, it won't work.
See:
@DarkAlchy Just pass --lr_scheduler_type and --lr_scheduler_args to command line.
Most of the schedulers in torch.optim.lr_scheduler can be imported directly.
For example:
accelerate launch "train_network.py" ... --lr_scheduler_type "CosineAnnealingLR" --lr_scheduler_args "T_max=100"
Here's the thread: https://github.com/kohya-ss/sd-scripts/issues/601#issuecomment-1602496423
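To make the quoted comment concrete, here is a hedged Python sketch (hypothetical code, not sd-scripts' actual implementation; `parse_scheduler_args` is an invented helper name) of how "key=value" strings such as "T_max=100" could be turned into scheduler keyword arguments before the class is looked up in torch.optim.lr_scheduler:

```python
import ast

def parse_scheduler_args(arg_strings):
    """Parse "key=value" strings (e.g. "T_max=100") into kwargs."""
    kwargs = {}
    for s in arg_strings:
        key, _, value = s.partition("=")
        try:
            # numbers, lists, and bools become real Python values
            kwargs[key.strip()] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            kwargs[key.strip()] = value  # fall back to the raw string
    return kwargs

# The scheduler itself would then be built with something like:
#   cls = getattr(torch.optim.lr_scheduler, "CosineAnnealingLR")
#   scheduler = cls(optimizer, **parse_scheduler_args(["T_max=250"]))
```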
It seems these args need to be passed as a string, as in the CLI, but the toml uses a list.
I don't know if this is the answer, and I'm not sure it was ever addressed:
https://github.com/kohya-ss/sd-scripts/issues/601#issuecomment-1605241097
@DarkAlchy Oh, my fault.
It seems that args.blahblah was just passed a str instead of a list through the .toml.
We should pass a list through the .toml:
lr_scheduler_args = ['T_max=100', ...]
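For concreteness, a sketch of how the two keys from this thread would look in a training config toml, assuming the list form the upstream comment suggests (values taken from this thread):

```toml
# list form, as suggested upstream
lr_scheduler_type = "CosineAnnealingLR"
lr_scheduler_args = ["T_max=250"]
```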
Could you somehow run the arg in the CLI (necessary) but also place it in the toml in a way that it passes to wandb at the same time?
Does this make sense?
Two results:
- additional parameter scheduling works through the GUI
- the parameter passes to wandb
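A minimal sketch of the GUI-side conversion being requested (hypothetical code, not kohya_ss's actual implementation; `scheduler_args_to_list` is an invented helper name): split the space-separated GUI value into the list form the toml needs before writing it out:

```python
import shlex

def scheduler_args_to_list(gui_value: str) -> list[str]:
    # Turn a GUI string like 'T_max=250 eta_min=1e-6' into the
    # list form the toml expects: ["T_max=250", "eta_min=1e-6"].
    # shlex.split also respects quoting in the GUI field.
    return shlex.split(gui_value)
```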
Ok, I can work with that to reproduce the issue and find a fix… hope it won't lead to a rat's nest of a code change.
I know everyone using scheduling args through the GUI who wants to pass them to wandb will be appreciative.
OK, it is now available in the dev branch.
I have added it to the ./test/config/dreambooth-AdamW8bit-toml.json test config so you can easily see it in action.
The dropdown list allows you to type custom scheduler names, since the built-in list is very short. If you know of other choices, I can easily add them to the list.
@bmaltais confirming, this is working perfectly. Thanks!