Comments (3)
The slowdown you report seems reasonable, considering that 8 A100 GPUs in a single machine are connected via NVLink or the PCIe bus, both of which have considerably more bandwidth than the 100 Gb/s NIC you used between the 2 nodes. One note I would like to add is that training speed is calculated using the formula: average it/s * local batch size * number of GPUs across all nodes. So your speed drop in percentage terms is not 3/18 but 6/18.
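To make the formula concrete, here is a minimal sketch of that calculation; the numbers below are placeholders for illustration, not the figures from the original report:

```python
# Hypothetical numbers for illustration; substitute your own measurements.
def training_speed(avg_it_per_s, local_batch_size, num_gpus_all_nodes):
    """Samples/s: average it/s * local batch size * total GPUs across all nodes."""
    return avg_it_per_s * local_batch_size * num_gpus_all_nodes

one_node  = training_speed(2.0, 1024, 8)    # 8 GPUs on a single node
two_nodes = training_speed(1.8, 1024, 16)   # 16 GPUs across 2 nodes

# Compare against ideal linear scaling from the single-node run,
# not just the raw drop in it/s.
ideal = one_node * 2
print(f"drop vs. linear scaling: {(ideal - two_nodes) / ideal:.1%}")
```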
I think you may be confusing allreduce with TorchRec's KJTAllToAll. Allreduce is used in the context of averaging gradients where there is model replication (data parallelism). It is not used for model-parallel parts such as sharded embedding tables, because there are no replicated trainable parameters whose gradients need to be averaged and kept identical across all replicas.
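As a minimal illustration of the distinction, here is a sketch using raw `torch.distributed` collectives (not TorchRec's internals), assuming a process group is already initialized:

```python
import torch
import torch.distributed as dist

# Assumes dist.init_process_group("nccl") has already been called.

def average_gradients(model: torch.nn.Module) -> None:
    """Data parallelism: every rank holds a full replica, so gradients are
    summed across ranks and divided by world size (roughly what DDP's
    allreduce does under the hood)."""
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size

def exchange_lookups(outputs_per_rank):
    """Model parallelism: each rank owns a shard of the embedding tables and
    sends each peer the lookup results it needs (roughly what KJTAllToAll
    accomplishes in TorchRec). No gradient averaging is involved."""
    received = [torch.empty_like(t) for t in outputs_per_rank]
    dist.all_to_all(received, outputs_per_rank)
    return received
```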
TorchRec uses a sharding planner to pick the sharding plan it considers best. Regarding NCCL algorithms like Ring and Tree, the NCCL repo would be a better place to ask or read about the trade-offs of different settings; prior issues may already cover the topic, such as this issue.
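For reference, here is a minimal sketch of invoking the planner directly (API names per recent TorchRec releases; check your installed version). NCCL's algorithm choice, by contrast, is typically steered via environment variables such as `NCCL_ALGO`.

```python
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder

# Toy table configuration for illustration only.
ebc = EmbeddingBagCollection(tables=[
    EmbeddingBagConfig(name="t1", embedding_dim=64,
                       num_embeddings=1_000_000, feature_names=["f1"]),
])

topology = Topology(world_size=16, compute_device="cuda")
planner = EmbeddingShardingPlanner(topology=topology)

# Produces a ShardingPlan; printing it shows which sharding type
# (table-wise, row-wise, column-wise, ...) the planner chose per table.
plan = planner.plan(ebc, [EmbeddingBagCollectionSharder()])
print(plan)
```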
Hi @samiwilf, thanks for the insights.
The reason I mentioned allreduce is that when you zoom into the profile under the sparse_data_dist name scope, there are a bunch of Allreduce calls. Some Send/Receive calls can be seen as well, but I was wondering: what are those allreduce calls?
Another question I had regarding sparse_data_dist: based on the code, this part seems to be the initial feature distribution among ranks for the next step, which is supposed to be pipelined. However, the profiling results show it happening right between the forward pass and the backward pass of the current step, which makes things look sequential rather than pipelined. Am I missing something here? I would appreciate any insight on how to map the profile to the pipeline code.
I would recommend performing an ablation study of the code and profiling iteratively to pinpoint which parts of the model or training loop correspond to the parts of the profile in question. You could also try switching to TorchRec's nightly build, where sparse_data_dist has been replaced with start_sparse_data_dist and wait_sparse_data_dist; that may provide a more granular and informative profile. Lastly, although TorchRec is used for the embedding tables, the bottom and top MLPs are still ordinary PyTorch DDP modules, and DDP uses allreduce for gradient averaging.
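On mapping the profile to the pipeline code: the torchrec_dlrm training loop is driven by TorchRec's TrainPipelineSparseDist. A simplified skeleton of how it is typically used (assuming `model` is already wrapped in DistributedModelParallel) may help orient you:

```python
from torchrec.distributed.train_pipeline import TrainPipelineSparseDist

# `model`, `optimizer`, and `device` set up as usual; `model` is assumed
# to be wrapped in DistributedModelParallel.
pipeline = TrainPipelineSparseDist(model, optimizer, device)

dataloader_iter = iter(train_dataloader)
while True:
    try:
        # progress() overlaps the sparse input distribution (the all-to-all
        # work under sparse_data_dist) for the *next* batch with the forward
        # and backward passes of the *current* batch, so a profile of one
        # step will not line up one-to-one with a naive reading of the loop.
        out = pipeline.progress(dataloader_iter)
    except StopIteration:
        break
```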