Comments (8)
Here is an example, in case you were looking for that sort of thing (2 GPUs used here):
python -m torch.distributed.launch --nproc_per_node=2 main_pretrain.py --batch_size 16 \
--world_size 2 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
PS: works with PyTorch 1.8.x.
Node 1:
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 main_pretrain.py --batch_size 16 \
--world_size 2 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
Node 2:
python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 main_pretrain.py --batch_size 16 \
--world_size 2 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05
Modify batch size according to your hardware.
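For reference: if I read main_pretrain.py correctly, the effective batch size is batch_size (per GPU) * accum_iter * total number of GPUs, so the two-node command above gives 16 * 4 * 16 = 1024.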
PS: For PyTorch >= 1.9, replace python -m torch.distributed.launch with torchrun.
PS2: I haven't tried multi-node multi-GPU training myself.
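For example, here is a rough sketch of what the node-0 command might look like with torchrun (untested; the master address and port are placeholders you would set for your cluster, and torchrun sets WORLD_SIZE/RANK through environment variables, so --world_size should not be needed):
torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 \
--master_addr=<node0-ip> --master_port=29500 \
main_pretrain.py --batch_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05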
submitit is used for "multi-node multi-GPU distributed training".
Alright, maybe the submitit script is for a server cluster different from the one I am used to. I solved the problem by using the main_pretrain.py script with torch.distributed.launch. And thanks for answering my question.
Sure, submitit is used for Slurm clusters, and torch.distributed.launch is another common way.
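For anyone on Slurm: a minimal sketch using the repo's submitit_pretrain.py entry point (the job directory and partition are placeholders; please check the script's arguments before relying on this):
python submitit_pretrain.py \
--job_dir /path/to/job_dir \
--nodes 2 \
--ngpus 8 \
--partition <your_partition> \
--batch_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05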
What changes should I make to the command above if I have two nodes with 8 GPUs each and I also want to keep the batch size the same?
Thanks for sharing!
I want to raise another question about the difference in performance (the speed at which the loss decreases) between these two implementations.
From what I have tested on the exact same model, the original implementation (based on submitit) performs much better than the other implementation w.r.t. the speed of loss decrease, in the case of a single V100 on a single node.
Maybe you can also test this interesting behavior. Thanks a lot.
Hi, may I ask how to do the pre-training on a single GPU?
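Not the author, but a sketch that may work: main_pretrain.py appears to fall back to non-distributed mode when launched directly (util/misc.py's init_distributed_mode prints "Not using distributed mode" when no distributed environment is detected), so something like this untested command could do single-GPU pre-training:
python main_pretrain.py --batch_size 16 \
--accum_iter 4 \
--model mae_vit_base_patch16 \
--norm_pix_loss \
--mask_ratio 0.75 \
--epochs 800 \
--warmup_epochs 40 \
--blr 1.5e-4 --weight_decay 0.05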