
Comments (3)

samiwilf commented on July 18, 2024

The slowdown you report seems reasonable. On a single machine, 8 A100 GPUs are connected via NVLink or the PCIe bus, both of which have considerably more bandwidth than the 100 Gb/s NIC you used between the two nodes. One note I would like to add: training speed is calculated as average it/s * local batch size * number of GPUs across all nodes. So your speed drop in percentage terms is not 3/18 but 6/18.
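For concreteness, here is a minimal sketch of that formula; the numbers below are hypothetical placeholders, not the figures from your report:

```python
# Hypothetical sketch of the training-speed formula; none of these
# numbers come from the original report.

def training_speed(avg_it_per_s: float, local_batch_size: int, num_gpus: int) -> float:
    """Samples processed per second, aggregated across all ranks."""
    return avg_it_per_s * local_batch_size * num_gpus

single_node = training_speed(avg_it_per_s=2.0, local_batch_size=4096, num_gpus=8)   # 1 node, 8 GPUs
two_nodes = training_speed(avg_it_per_s=1.6, local_batch_size=4096, num_gpus=16)    # 2 nodes, 16 GPUs

# Compare against ideal linear scaling (2x the single-node speed), not
# against the raw it/s numbers: the GPU count changed, so it/s alone
# understates the drop.
ideal = 2 * single_node
print(f"throughput drop vs. ideal scaling: {(ideal - two_nodes) / ideal:.1%}")
```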

I think you may be confusing allreduce with TorchRec's KJTAllToAll. Allreduce is used to average gradients where there is model replication (data parallelism). It is not used for model-parallel parts such as sharded embedding tables, because there are no replicated trainable parameters whose gradients would need to be averaged and kept identical across replicas.
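To make the distinction concrete, here is a minimal, hypothetical sketch (not DLRM or TorchRec code) contrasting the two collectives; it assumes one GPU per process and a torchrun launch:

```python
# Launch with, e.g.: torchrun --nproc_per_node=2 collectives_demo.py
import os

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
rank = dist.get_rank()
world = dist.get_world_size()
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
device = torch.device("cuda", local_rank)

# Data parallelism (DDP-style replicated MLPs): every rank holds a gradient
# for the same parameters; allreduce sums them and we divide, so all
# replicas end up with the identical averaged gradient.
grad = torch.full((4,), float(rank), device=device)
dist.all_reduce(grad, op=dist.ReduceOp.SUM)
grad /= world

# Model parallelism (sharded embedding tables): each rank sends a different
# chunk to every other rank (input/output distribution); nothing is
# replicated, so nothing gets averaged.
send = torch.arange(world * 2, dtype=torch.float32, device=device) + 100 * rank
recv = torch.empty_like(send)
dist.all_to_all_single(recv, send)

print(f"rank {rank}: averaged grad {grad.tolist()}, received {recv.tolist()}")
dist.destroy_process_group()
```

KJTAllToAll is essentially the second pattern applied to KeyedJaggedTensor batches.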

TorchRec uses a sharding planner to pick the sharding plan it considers best. Regarding NCCL algorithms like Ring and Tree, the NCCL repo would be a better place to ask or read about the tradeoffs of different settings; I think prior issues there may already cover the topic.
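If you want to see which plan the planner chose, a rough sketch of running it standalone looks like the following (API names follow torchrec.distributed.planner and may differ between TorchRec versions; the table config is made up):

```python
import torch
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology

# A made-up single-table model, just to have something to plan for.
ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t1",
            embedding_dim=64,
            num_embeddings=1_000_000,
            feature_names=["f1"],
        )
    ],
    device=torch.device("meta"),
)

planner = EmbeddingShardingPlanner(
    topology=Topology(world_size=16, compute_device="cuda")
)
plan = planner.plan(ebc, [EmbeddingBagCollectionSharder()])
print(plan)  # prints the chosen sharding type and placement per table
```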


MrAta commented on July 18, 2024

Hi @samiwilf, thanks for the insights.

The reason I mentioned allreduce is that when you zoom into the profile under the sparse_data_dist name scope, there are a number of Allreduce calls:

[Screenshot, 2023-09-15: profiler trace showing Allreduce calls under the sparse_data_dist name scope]

Some Send/Recv calls can be seen as well, but I was wondering what those Allreduce calls are.

Another question I had regarding sparse_data_dist: based on the code, this part appears to be the distribution of input features among ranks for the next step, which is supposed to be pipelined. However, in the profiling results it happens right between the forward pass and the backward pass of the current step, which makes things look sequential rather than pipelined:
[Screenshot, 2023-09-25: profiler trace showing sparse_data_dist between the forward and backward passes of the current step]

Am I missing something here? I would appreciate any insight on how to map the profile to the pipeline code.


samiwilf commented on July 18, 2024

I would recommend performing an ablation study of the code and profiling iteratively to pinpoint which parts of the model or training loop correspond to the parts of the profile in question. You could try switching to TorchRec's nightly build, since sparse_data_dist has been replaced with start_sparse_data_dist and wait_sparse_data_dist; that may provide a more granular and informative profile. Lastly, although TorchRec is used for the embedding tables, the bottom and top MLPs are still ordinary PyTorch DDP modules, and they use allreduce for gradient averaging. That is the likely source of the Allreduce calls in your trace.
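As a sketch of where those stages live in the pipelined trainer (this mirrors TorchRec's TrainPipelineSparseDist usage rather than the DLRM repo verbatim; the model, optimizer, and dataloader are assumed to come from your existing setup):

```python
import torch
from torchrec.distributed.train_pipeline import TrainPipelineSparseDist

def run_pipelined(model: torch.nn.Module,
                  optimizer: torch.optim.Optimizer,
                  dataloader) -> None:
    # model is assumed to already be wrapped in DistributedModelParallel.
    pipeline = TrainPipelineSparseDist(model, optimizer, torch.device("cuda"))
    batches = iter(dataloader)
    while True:
        try:
            # progress() runs forward/backward/optimizer for batch i while
            # kicking off the sparse input distribution for batch i+1
            # (the start_sparse_data_dist / wait_sparse_data_dist stages),
            # so the data dist should overlap compute rather than sit
            # between forward and backward.
            pipeline.progress(batches)
        except StopIteration:
            break
```

If the data dist still shows up between forward and backward in your trace, that would suggest the overlap is not engaging as expected, which the more granular nightly scopes should help confirm.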

