Comments (8)
This is correct: it is the responsibility of the custom loader to load the appropriate dataset portion into GPU memory, based on local rank (or whatever else the client finds appropriate). DeepSpeed does not impose any restrictions on the custom dataloader and does not perform any sanity checks.
Hope that clears things up.
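For illustration, here is a minimal sketch of one way to shard data per process with a stock PyTorch DistributedSampler. It assumes torch.distributed is already initialized (e.g. via deepspeed.init_distributed()) and uses a made-up toy dataset; note that shard selection typically keys off the global rank, while device placement uses the local rank:

```python
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# Toy stand-in for real training data (1024 examples, 16 features).
dataset = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))

# DistributedSampler gives each rank a disjoint shard of the indices,
# so no two processes train on the same examples within an epoch.
sampler = DistributedSampler(
    dataset,
    num_replicas=dist.get_world_size(),
    rank=dist.get_rank(),      # global rank selects the shard
    shuffle=True,
)

# batch_size here is the per-GPU micro batch; it should match
# train_micro_batch_size_per_gpu in the DeepSpeed config.
loader = DataLoader(dataset, batch_size=8, sampler=sampler)

for epoch in range(3):
    sampler.set_epoch(epoch)   # reshuffle shard assignment each epoch
    for batch in loader:
        pass  # forward/backward/step go here
```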
- Passing training_data to deepspeed.initialize is optional, not required. Some models can benefit from DeepSpeed's I/O optimizations, but using a torch trainloader is fine.
- In our experience, DeepSpeed works well with custom trainloaders and datasets, so I suspect the answer is yes, but please share any issues you run into.
- The train_batch_size in the JSON file is used to perform gradient accumulation and compute progress statistics, so a mismatch could result in incorrect training and confusing statistics (see the sketch after this list).
- Yes, this is supported.
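To make the batch-size arithmetic concrete, here is a hedged sketch with a toy model and dataset (it assumes a recent DeepSpeed where deepspeed.initialize accepts a config dict; the same keys go in the JSON file). DeepSpeed checks that train_batch_size equals train_micro_batch_size_per_gpu × gradient_accumulation_steps × number of GPUs:

```python
import torch
import deepspeed
from torch.utils.data import TensorDataset

# Illustrative values only.
world_size = 4     # total GPUs across all nodes
micro_batch = 8    # train_micro_batch_size_per_gpu
grad_accum = 2     # gradient_accumulation_steps

# Same content as the JSON config file; the product invariant below
# must hold or DeepSpeed raises a config error at initialization.
ds_config = {
    "train_micro_batch_size_per_gpu": micro_batch,
    "gradient_accumulation_steps": grad_accum,
    "train_batch_size": micro_batch * grad_accum * world_size,  # 64
}

model = torch.nn.Linear(16, 2)  # toy model
dataset = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))

# Option 1: pass training_data and let DeepSpeed build the loader.
engine, optimizer, trainloader, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(),
    training_data=dataset, config=ds_config)

# Option 2: omit training_data and iterate your own torch DataLoader,
# making sure its batch size equals micro_batch.
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```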
@tjruwase could you clarify - how does the parallelization work with a custom dataloader? Do you need to make sure the dataloader uses the local rank as input to load a separate portion of the dataset manually?
@tjruwase Thanks for the clarification.
Just one more question.
Assuming I have a custom data generator like this:

```python
for batch_idx, batch in enumerate(DatasetGenerator):
    data = batch['input'].to(model_engine.local_rank)
    target = batch['target'].to(model_engine.local_rank)
    src_padding = batch['padding_mask'].to(model_engine.local_rank)
```
Does this mean:
- The batch size should be equal to train_micro_batch_size_per_gpu?
- It should provide a different/random batch for each GPU/node?
- Yes, batch_size should be equal to train_micro_batch_size_per_gpu, which is the batch size for a single step on one GPU.
- Assuming DatasetGenerator is returning the correct batch for each GPU, then this is correct, since .to() simply moves the data bits into GPU memory. A sketch of the full loop with the DeepSpeed engine follows below.
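Putting the two answers together, a hedged sketch of the full training step, reusing the names from the question above (DatasetGenerator and the forward signature are assumptions, not DeepSpeed APIs; model_engine comes from deepspeed.initialize):

```python
for batch_idx, batch in enumerate(DatasetGenerator):
    device = model_engine.local_rank  # GPU index on this node
    # Each batch is one micro batch of train_micro_batch_size_per_gpu
    # examples; .to() only moves the bits onto this rank's GPU.
    data = batch['input'].to(device)
    target = batch['target'].to(device)
    src_padding = batch['padding_mask'].to(device)

    loss = model_engine(data, src_padding, target)  # hypothetical forward signature
    model_engine.backward(loss)  # engine handles loss scaling / accumulation
    model_engine.step()          # optimizer steps at accumulation boundaries
```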
@tjruwase Thanks a lot, that answers all my questions for now :)
@tjruwase assuming I'm using a custom data loader like @agemagician above, but in a multi-node, multi-GPU setting, how would I go about sending tensors to the right GPU? Do I still do tensor.to(engine.local_rank), or global_rank, please? Thanks.
@tnq177, I believe you want local_rank, as crossing node boundaries requires communication collectives, like broadcast.
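To illustrate the distinction, a small sketch (it assumes the process group is already initialized and that LOCAL_RANK was set by the deepspeed launcher):

```python
import os
import torch
import torch.distributed as dist

global_rank = dist.get_rank()               # unique across all nodes
local_rank = int(os.environ["LOCAL_RANK"])  # GPU index within this node

# CUDA devices are numbered per node, so tensors go to cuda:local_rank;
# a global rank would be out of range on any node after the first.
torch.cuda.set_device(local_rank)
x = torch.randn(4, 4).to(f"cuda:{local_rank}")
```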