Comments (28)
Hi,
Yes, I believe based on the readme you need torch 1.12 to run it. In fact some of these legacy APIs are under migration and are not guaranteed to be runnable, but I'll try some fixes tomorrow.
I know. But I want to deploy Colossal on an NVIDIA H800 GPU, which only supports CUDA 12. With CUDA 12 I can only install PyTorch 2.0+, not 1.12. Could you give me some further suggestions?
Sorry, the current auto parallel is less performant and less widely used, so we didn't adapt it to the newest version. Do you have a compelling reason to use it?
Otherwise, it's advised to use the HybridParallelPlugin or Gemini (ZeRO-3 with chunk-based memory management).
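For reference, a minimal sketch of the Booster-based setup those plugins use. The toy nn.Linear model, learning rate, and optimizer choice are placeholders, the script is assumed to be started with colossalai run or torchrun, and on older releases launch_from_torch also expects a config dict.

import torch.nn as nn
import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin
from colossalai.nn.optimizer import HybridAdam

colossalai.launch_from_torch()  # older releases: launch_from_torch(config={})
model = nn.Linear(1024, 1024).cuda()                 # placeholder model
optimizer = HybridAdam(model.parameters(), lr=1e-3)  # placeholder optimizer
plugin = GeminiPlugin()  # ZeRO-3 with chunk-based memory management; swap in HybridParallelPlugin(...) for TP/PP
booster = Booster(plugin=plugin)
model, optimizer, *_ = booster.boost(model, optimizer)

After boosting, the training loop stays the usual forward/backward, except the backward pass goes through booster.backward(loss, optimizer).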
I don't necessarily have to use the Auto Parallel strategy. What I mean is that the official demos provided now are all based on the torch 1.12 API, but on the H800 only torch 2.0+ can be used, which means I can't deploy training on the H800.
Other demos should work on torch 2.0
Could you give me some examples? I have tried many training demo codes, but they all failed on torch 2.0 and succeeded on torch 1.12.
Could you try examples/language/gpt/gemini and examples/language/gpt/hybridparallelism?
It seems that the transformers API is not compatible with the current ColossalAI.
I have fixed this so pulling from the newest main branch should work
Could you either install apex from source or set enable_all_optimization=False? Thanks.
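For context, enable_all_optimization is a ColossalAI plugin argument (the finetune example passes it to HybridParallelPlugin), not an apex option. A minimal sketch of where the flag goes; the parallel sizes and microbatch setting below are illustrative, not the example's exact values.

from colossalai.booster.plugin import HybridParallelPlugin

plugin = HybridParallelPlugin(
    tp_size=1,
    pp_size=2,
    microbatch_size=1,
    precision="fp16",
    enable_all_optimization=False,  # skip flash-attention and other fused kernels
)

With the flag off, shardformer should fall back to the plain, non-fused PyTorch implementations.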
I have re-compiled and re-installed apex from source and re-run the program, and got the following:
/usr/local/lib/python3.10/dist-packages/colossalai/nn/optimizer/hybrid_adam.py:90: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
self._dummy_overflow_buf = torch.cuda.IntTensor([0])
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch [1/3]: 0%| | 0/57 [00:00<?, ?it/s]Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Epoch [1/3]: 0%| | 0/57 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 313, in <module>
main()
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 293, in main
train_epoch(epoch, model, optimizer, _criterion, lr_scheduler, train_dataloader, booster, coordinator)
File "/root/ColossalAI/examples/language/gpt/hybridparallelism/finetune.py", line 147, in train_epoch
outputs = booster.execute_pipeline(
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/booster.py", line 205, in execute_pipeline
return self.plugin.execute_pipeline(data_iter, model, criterion, optimizer, return_loss, return_outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 1259, in execute_pipeline
outputs = self.schedule.forward_backward_step(
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 445, in forward_backward_step
result = self.run_forward_backward(model, data_iter, criterion, optimizer, return_loss, return_outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 365, in run_forward_backward
output_obj = self.forward_step(model, input_obj, criterion, accum_loss, outputs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/one_f_one_b.py", line 249, in forward_step
output_obj = model_forward(model, micro_batch, input_obj)
File "/usr/local/lib/python3.10/dist-packages/colossalai/pipeline/schedule/_utils.py", line 120, in model_forward
return model(**data, **internal_inputs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/booster/plugin/hybrid_parallel_plugin.py", line 197, in forward
return super().forward(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/interface/model.py", line 25, in forward
return self.module(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 718, in gpt2_for_sequence_classification_forward
outputs = GPT2PipelineForwards.gpt2_model_forward(
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 260, in gpt2_model_forward
outputs = block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/gpt2/modeling_gpt2.py", line 390, in forward
attn_outputs = self.attn(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/modeling/gpt2.py", line 840, in forward
attn_output = ColoAttention.attention(query, key, value, **attention_mask, dropout_p=dropout_p, scale=scale)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/layer/attn.py", line 250, in attention
attn_func = ColoAttention._dispatch_kernel(q.dtype, mask_type)
File "/usr/local/lib/python3.10/dist-packages/colossalai/shardformer/layer/attn.py", line 98, in _dispatch_kernel
].load()
File "/usr/local/lib/python3.10/dist-packages/colossalai/kernel/kernel_loader.py", line 73, in load
assert len(usable_exts) != 0, f"No usable kernel found for {self.__class__.__name__} on the current machine."
AssertionError: No usable kernel found for FlashAttentionWithPaddingMaskLoader on the current machine.
Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
You'll need to either set enable_all_optimization=False or pip install flash-attn
pip install flash-attn
Should I set enable_all_optimization for ColossalAI or for apex?
Fixed, thanks.
I have solved all the issues related to the system env, but when I re-run the program in ~/ColossalAI/examples/language/gpt/hybridparallelism I got the following ProcessError:
When I run the examples in ColossalAI/examples/language/gpt/hybridparallelism/ using the command colossalai run --nproc_per_node=2 finetune.py, I always get the following error:
On the other hand, could you show how I can run this example on multiple nodes (machines)? Thanks! @Edenzzzz
Thanks for your issue. This is probably due to a recent transformers upgrade, so I've fixed it.
For multi-node please refer to commands in examples/language/llama/README.md
Thanks for your reply. Actually, I have launched two Docker containers on two separate machines. How can I configure the Docker addresses in the hostfile?
Please refer to similar examples on the PyTorch forum. You can either run Docker in host network mode or map a port from the container to the host.
https://discuss.pytorch.org/t/how-to-multi-node-parallel-in-dockers-container/188736
https://discuss.pytorch.org/t/run-multi-node-training-inside-docker/167537
When I run the examples in ColossalAI/examples/language/gpt/hybridparallelism/ using the command bash run.sh, I always get the following error:
Failed to run torch 2.1 on a Tesla V100 GPU .....
@Edenzzzz did you test this demo on a V100 GPU with cuda 12.1 and torch 2.1?