Comments (5)
Hi, what is the status for this?
I want to compare ZeRO in FP32 to my system.
There are several use cases where FP32 can use less memory than mixed precision, since mixed precision still keeps FP32 master parameters (for example, a model that takes 44 GB in FP32 needs 66 GB in mixed precision just for the parameters).
If the activations are small enough, then mixed precision may not be a good choice (at least memory-wise).
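To make the arithmetic concrete, here is a rough sketch of the parameter-memory math (a minimal illustration assuming 4 bytes per fp32 value and 2 bytes per fp16 value; the function name and parameter count are placeholders, not from the thread):

def param_memory_gb(num_params):
    # Pure FP32 training stores one 4-byte copy of each parameter.
    fp32_only = 4 * num_params
    # Mixed precision keeps the 4-byte fp32 master copy plus a 2-byte fp16 working copy.
    mixed_precision = 4 * num_params + 2 * num_params
    gib = 1024 ** 3
    return fp32_only / gib, mixed_precision / gib

# ~11.8B parameters: roughly 44 GB in pure FP32 vs. roughly 66 GB in mixed precision,
# matching the example above (optimizer states and activations not included).
print(param_memory_gb(11_800_000_000))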
@LvanderGoten, ZeRO should work with FP32.
Please disable the fp16 flag via
"fp16": {
"enabled": false
}
and also turn on the fp32_allreduce flag.
Since we implemented ZeRO with fp16 in mind, some of the variable names may not make sense for fp32, and there will be some redundant data copies and memory allocation, but you should still be able to see up to 2x memory reduction with ZeRO in fp32.
Please give it a try and let us know if you run into any issues.
I get the following:
Traceback (most recent call last):
File "identification/generative/CPGAN/train_deepspeed.py", line 186, in <module>
main()
File "identification/generative/CPGAN/train_deepspeed.py", line 179, in main
save_every_n_steps=args.save_every_n_steps)
File "/home/deepspeed/Code/identification/generative/CPGAN/orchestrator_msggan_deepspeed.py", line 250, in train
training_data=data)
File "/usr/local/lib/python3.6/dist-packages/deepspeed/__init__.py", line 95, in initialize
collate_fn=collate_fn)
File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_light.py", line 126, in __init__
self._configure_with_arguments(args, mpu)
File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_light.py", line 324, in _configure_with_arguments
self._config = DeepSpeedConfig(args.deepspeed_config, mpu)
File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 243, in __init__
self._do_sanity_check()
File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 359, in _do_sanity_check
self._do_error_check()
File "/usr/local/lib/python3.6/dist-packages/deepspeed/pt/deepspeed_config.py", line 379, in _do_error_check
assert self.fp16_enabled, "DeepSpeedConfig: ZeRO is only supported if fp16 is enabled"
AssertionError: DeepSpeedConfig: ZeRO is only supported if fp16 is enabled
Using this config:
{
    "train_batch_size": 8,
    "gradient_accumulation_steps": 1,
    "steps_per_print": 1,
    "zero_optimization": true,
    "fp32_allreduce": true,
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 0.0001
        }
    },
    "fp16": {
        "enabled": false
    }
}
We haven't tested ZeRO with fp32, so there may be a few issues along the way here :) If you want to try it out, I would comment out these lines to disable this check: https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/pt/deepspeed_config.py#L378-L379
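For reference, the check in question is the assertion shown at the bottom of the traceback above; a sketch of the local edit, in your installed copy of deepspeed/pt/deepspeed_config.py inside _do_error_check, would look like this:

# Local workaround sketch only: comment out the fp16 sanity check so a
# ZeRO + fp32 config can pass _do_error_check(). Not an official change.
# assert self.fp16_enabled, \
#     "DeepSpeedConfig: ZeRO is only supported if fp16 is enabled"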
Feel free to re-open if you run into other issues or let us know if this solved your problem.