[BUG] Version >0.14.0 leads to `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!` (deepspeed, open, 15 comments)

pacman100 commented on June 2, 2024
[BUG] Version >0.14.0 leads to `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`

Comments (15)

bug-fixed commented on June 2, 2024

@tjruwase please try this example (https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune.sh) with zero3-offload. Thanks.

lihe07 commented on June 2, 2024

I found a workaround. Just manually patching your runtime/zero/stage3.py according to PR 5461 will fix everything.
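
I haven't gone through the PR line by line, so take this as a sketch rather than the actual diff: the crash (see the traceback further down in this thread) is an in-place multiply in unscale_and_clip_grads between the CPU-offloaded fp32 gradient partition and a CUDA combined_scale, so the patch essentially has to get both onto the same device before the op, something along these lines:

    # Hypothetical sketch of the device-alignment idea, NOT the literal PR 5461 diff.
    # In deepspeed/runtime/zero/stage3.py, unscale_and_clip_grads() ends with roughly:
    #     self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
    # With CPU/NVMe offload that grad lives on the CPU while combined_scale can be a
    # CUDA tensor; moving the scale to the gradient's device avoids the mixed-device op.
    import torch

    def unscale_and_clip_grad_sketch(fp32_grad: torch.Tensor, combined_scale):
        if torch.is_tensor(combined_scale):
            combined_scale = combined_scale.to(fp32_grad.device)
        fp32_grad.mul_(1.0 / combined_scale)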

loadams commented on June 2, 2024

Hi @pacman100 - thanks for making this issue here to better track it. Does this also happen with the latest changes in the master branch?

pacman100 commented on June 2, 2024

> Hi @pacman100 - thanks for making this issue here to better track it. Does this also happen with the latest changes in the master branch?

I confirm that the test passes when using the master branch. It would be great to have a patch release if possible.

bug-fixed commented on June 2, 2024

The problem still exists with ZeRO-3 offload. It seems to lie in the parameter-partitioning part of ZeRO-3: if the model has multiple parallel modules or frozen parameters, the offload procedure cannot build the correct parameter mapping. Please take a look and fix it. Thanks.

tjruwase commented on June 2, 2024

@bug-fixed, are you able to share repro for zero3-offload case? Thanks!

jomayeri commented on June 2, 2024

@bug-fixed that repro does not work. Please provide a more precise, single-script reproduction.

bug-fixed commented on June 2, 2024

@jomayeri, thanks for the response. The file needed by the script can be downloaded here: https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-13b-v1.5/tree/main.

Unfortunately, I don't think I can prepare a more concise script; apologies for that. With Llama-3 alone, ZeRO-3 offload works fine. But when I tested with the script above, i.e. a model equipped with a vision transformer and another simple linear module, the problem occurred. I guess many factors could contribute to it. Please note that the conclusion in my previous comment might be wrong, given my very limited knowledge of DeepSpeed. Here is partial error information for your reference:

**deepspeed_aio:   fstat for read failed on /lscratch/26730337/offload/param/zero_stage_3/bfloat16params/rank0/291_param.tensor.swp error = 2**

[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/engine.py", line 1582, in _configure_zero_optimizer
[cn1112:0]:    optimizer = DeepSpeedZeroOptimizer_Stage3(                                                                                                                                                           
[cn1112:0]:                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 362, in __init__
[cn1112:0]:    self._setup_for_real_optimizer()
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 472, in _setup_for_real_optimizer
[cn1112:0]:    self._create_fp32_partitions()                                                             
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 845, in _create_fp32_partitions
[cn1112:0]:    self._swap_in_sub_group_to_flat_buffer(unpinned_fp32_buffer, i)                                                                                                                                      
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/zero/stage3.py", line 762, in _swap_in_sub_group_to_flat_buffer
[cn1112:0]:    param.nvme_swapper.swap_in([param], async_op=False) 
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 306, in swap_in
[cn1112:0]:    swap_in_tensors(self.aio_read_handle, swap_in_buffers, swap_in_paths)
[cn1112:0]:  File "/vf/users/Panaji/anaconda3/envs/th21_ds/mypkgs/DeepSpeed_0142/deepspeed/runtime/swap_tensor/utils.py", line 21, in swap_in_tensors
[cn1112:0]:    assert (swap_handle.async_pread(buffer, path) == 0)

It shows that some parameter swap file was never written to storage (error = 2 means the file does not exist). I guess one possible reason is that it failed to build the correct parameter mapping.
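
A quick way to confirm that is to compare what actually landed on the NVMe path with the file the failed read expects. A rough check (the path is just my job's offload directory; adjust as needed):

    # Rough sanity check: is the swap file the aio read expects actually on disk?
    # The path is specific to my job's offload directory; adjust as needed.
    import glob
    import os

    expected = ("/lscratch/26730337/offload/param/zero_stage_3/"
                "bfloat16params/rank0/291_param.tensor.swp")
    swap_dir = os.path.dirname(expected)

    present = sorted(glob.glob(os.path.join(swap_dir, "*_param.tensor.swp")))
    print(f"{len(present)} swap files in {swap_dir}")
    print("expected file present:", os.path.exists(expected))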

My ZeRO-3 offload config is:

"zero_optimization": {
  "stage": 3,
  "offload_optimizer": {
    "device": "nvme",
    "nvme_path": "/lscratch/26730337/offload/optimizer",
    "pin_memory": true,
    "ratio": 0.2,
    "buffer_count": 4,
    "fast_init": false
  },
  "offload_param": {
    "device": "nvme",
    "nvme_path": "/lscratch/26730337/offload/param",
    "pin_memory": true,
    "buffer_count": 5,
    "buffer_size": 1e9,
    "max_in_cpu": 1e9
  },
  "overlap_comm": true,
  "contiguous_gradients": true,
  "sub_group_size": 1e9,
  "reduce_bucket_size": "auto",
  "stage3_prefetch_bucket_size": 0,
  "stage3_param_persistence_threshold": "auto",
  "stage3_max_live_parameters": 1e9,
  "stage3_max_reuse_distance": 0,
  "gather_16bit_weights_on_model_save": true
},
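
For completeness, this is only the zero_optimization section; the rest of the config and the way it reaches DeepSpeed are standard. A rough, self-contained sketch of driving a config like this directly (note the "auto" values in my posted config are resolved by the HF Trainer integration, so a direct deepspeed.initialize call needs concrete numbers instead; the model below is a stand-in, not LLaVA):

    # Rough sketch of wiring a ZeRO-3 NVMe-offload config into deepspeed.initialize.
    # Assumptions: launched with `deepspeed this_script.py`, the async_io (libaio) op
    # is available, and concrete numbers replace the "auto" values used by HF Trainer.
    import deepspeed
    import torch

    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "bf16": {"enabled": True},
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
        "zero_optimization": {
            "stage": 3,
            "offload_optimizer": {"device": "nvme", "nvme_path": "/tmp/offload/optimizer", "pin_memory": True},
            "offload_param": {"device": "nvme", "nvme_path": "/tmp/offload/param", "pin_memory": True},
            "sub_group_size": 1e9,
            "reduce_bucket_size": 5e8,                  # concrete value instead of "auto"
            "stage3_param_persistence_threshold": 1e4,  # concrete value instead of "auto"
        },
    }

    model = torch.nn.Linear(1024, 1024)  # stand-in for the real model
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config
    )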

bug-fixed commented on June 2, 2024

@tjruwase I have updated my comment, please kindly check it. Thanks.

jomayeri commented on June 2, 2024

@bug-fixed Does the same thing happen when you offload to CPU?

lihe07 commented on June 2, 2024

@jomayeri Just encountered this problem. I use CPU offloading, and here is my deepspeed config:

    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",
            "pin_memory": true
        },
        "offload_optimizer": {
            "device": "cpu",
            "pin_memory": true
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }

The specific traceback is

    (output from two ranks, cuda:2 and cuda:0, was interleaved; the traceback on each rank is the same)

      File ".../site-packages/deepspeed/runtime/zero/stage3.py", line 2047, in step
        self.unscale_and_clip_grads(sub_group_id, scaled_global_grad_norm)
      File ".../site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
        ret_val = func(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^
      File ".../site-packages/deepspeed/runtime/zero/stage3.py", line 2117, in unscale_and_clip_grads
        self.fp32_partitioned_groups_flat[sub_group_id].grad.mul_(1. / combined_scale)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
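
For what it's worth, the error itself is just PyTorch refusing an in-place op that mixes a CPU tensor with a CUDA tensor, which is exactly what that last frame does when the fp32 partition is offloaded to CPU but the scale lives on the GPU. A minimal illustration (assuming a CUDA device is available; the names are made up):

    # Minimal illustration of the same failure mode (needs a CUDA device).
    import torch

    grad = torch.ones(4)                      # CPU tensor, like the offloaded fp32 grad partition
    scale = torch.tensor(2.0, device="cuda")  # CUDA tensor, like combined_scale
    grad.mul_(1.0 / scale)  # RuntimeError: Expected all tensors to be on the same device ...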

loadams commented on June 2, 2024

> I found a workaround. Just manually patching your runtime/zero/stage3.py according to PR 5461 will fix everything.

@lihe07 - so using the latest deepspeed built from source works? You don't hit any issues with Zero stage 3?
