Comments (4)
After testing, I found the following:
- Reducing the model size (e.g., cutting the original 32-layer LLaMA 7B down to 16 layers) prevents the loss from becoming NaN.
- Switching from BF16 to FP16 also prevents the loss from becoming NaN.
- When the loss becomes NaN, there is no protection mechanism, so all model parameters end up as NaN (see the sketch after this list).
- When sequence parallelism is enabled, the BF16 optimizer can overflow under certain circumstances, possibly due to numerical errors.
- I am still observing how the loss evolves under FP16 training.
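For reference, this is the kind of protection I have in mind: a minimal sketch (my own illustration, not code from Megatron-DeepSpeed; `guarded_step` and its arguments are made-up names) that checks the loss before backward/step so a single non-finite loss cannot poison every parameter.

```python
# Minimal NaN-guard sketch for a plain PyTorch training step
# (illustration only, not Megatron-DeepSpeed's actual mechanism).
import torch


def guarded_step(model, optimizer, loss):
    """Skip the update when the loss is non-finite, so one bad step
    cannot turn all model parameters into NaN."""
    if not torch.isfinite(loss):
        # Drop any gradients already accumulated and bail out.
        optimizer.zero_grad(set_to_none=True)
        return False
    loss.backward()
    # Gradient clipping as an extra safeguard against the overflow
    # described above; max_norm=1.0 is just an illustrative value.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```

A caller could simply log and continue whenever `guarded_step` returns `False`, which would at least keep a single bad batch from destroying the checkpoint.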
Hi @anogkongda, I also encountered the NaN issue and resolved it with #399. Could you try it and see whether it solves your problem?
Thank you, I will try it and report my results ASAP.
It doesn't work in my case. I'm still investigating to find a proper fix.