
Comments (38)

pentschev avatar pentschev commented on September 17, 2024 1

What I found out is that memory is really the issue: in the case described here, the GPU has 16 GB of memory. Trying that example with device_memory_limit='10000 MiB' fails, and just before failing the real GPU memory utilization was at 16 GB, despite the LRU tracker being under 10 GB. Reducing to device_memory_limit='5000 MiB' completes successfully, but takes 120 minutes. Raising back to device_memory_limit='10000 MiB' but reducing from chunksize='4096 MiB' to chunksize='1024 MiB' also finishes here, taking 71 minutes.

So what's happening is that cuDF allocates additional memory proportional to the chunk size, which makes sense. As of now, I don't know of a better/safer way of keeping track of the entire device memory (including that managed by cuDF), so I don't see another working solution for now other than using smaller chunks.

On side channels @VibhuJawa pointed out that the chunk sizes have a non-negligible impact on performance, so this is definitely something we want to improve in the future, but for the time being, using smaller chunk sizes is the only solution here.
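
For reference, a minimal sketch of that configuration (using the same dask-cuda/dask_cudf APIs as the examples later in this thread; the file path and the '10000 MiB'/'1024 MiB' values are just the ones discussed above, not recommendations):

from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

# Spill device memory to host once the worker's tracked usage passes the limit.
cluster = LocalCUDACluster(n_workers=1, device_memory_limit='10000 MiB')
client = Client(cluster)

# Smaller chunks keep cuDF's per-task working memory proportionally smaller,
# at the cost of more tasks and some scheduling overhead.
df = dask_cudf.read_csv('test.csv', chunksize='1024 MiB')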

from dask-cuda.

jrhemstad avatar jrhemstad commented on September 17, 2024 1

Note that with this PR from @shwina, it's even easier to set the allocator mode: rapidsai/cudf#2682

It should be as easy as just doing:

# non-pool, managed memory
set_allocator(allocator="managed")

# managed memory pool
set_allocator(allocator="managed", pool=True)
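
In a dask-cuda setup that would presumably be run on every worker, similar to the initialize_rmm pattern further down this thread. A sketch only, assuming set_allocator ends up exposed from cudf once that PR lands:

from dask.distributed import Client
from dask_cuda import LocalCUDACluster

def enable_managed_pool():
    # Assumes rapidsai/cudf#2682 exposes set_allocator from cudf.
    import cudf
    cudf.set_allocator(allocator="managed", pool=True)

cluster = LocalCUDACluster()
client = Client(cluster)
client.run(enable_managed_pool)  # apply the allocator setting on every worker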

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024 1

@jrhemstad, does managed here mean cudaMallocManaged?

@jakirkham yes, the RMM documentation has details on that: https://github.com/rapidsai/rmm#cuda-managed-memory
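
For reference, a rough sketch of enabling that mode through the newer rmm Python API (not the librmm_cffi interface used elsewhere in this thread; see the linked README for the authoritative options):

import rmm

# Route subsequent RMM allocations through cudaMallocManaged;
# pool_allocator=True would additionally enable a managed memory pool.
rmm.reinitialize(managed_memory=True, pool_allocator=False)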

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024 1

@jakirkham @kkraus14 and I were discussing this offline, so to set expectations straight:

“To share device memory pointers and events across processes, an application must use the Inter Process Communication API, which is described in detail in the reference manual. The IPC API is only supported for 64-bit processes on Linux and for devices of compute capability 2.0 and higher. Note that the IPC API is not supported for cudaMallocManaged allocations.”

Reference: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#interprocess-communication

from dask-cuda.

VibhuJawa avatar VibhuJawa commented on September 17, 2024 1

With TPCx-BB efforts being largely successful, I'd say this has been fixed or improved substantially, is that correct @VibhuJawa ? Are we good closing this or should we keep it open?

@pentschev, yup, with all the TPCx-BB work this has indeed improved a lot. This is good to close in my book too.

from dask-cuda.

VibhuJawa avatar VibhuJawa commented on September 17, 2024

@randerzander . FYI.

from dask-cuda.

mrocklin avatar mrocklin commented on September 17, 2024

@pentschev care to try the example above?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

Sorry, I totally missed this before. I checked the example and I can reproduce. I believe we still have bugs in the memory spilling, which wasn't really tested before beyond my synthetic test case. This is now at the top of my priority list, since we now seem to have more people who need this and we would like to have it working properly for 0.8.

from dask-cuda.

jrhemstad avatar jrhemstad commented on September 17, 2024

Could you try configuring RMM to use managed memory and see how that works?

See: https://github.com/rapidsai/rmm/blob/dfa8740883735e57bc9ebb95ed56a1321141a8b0/README.md#handling-rmm-options-in-python-code

You would use

from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_managed_memory = True

before importing cudf.

For a managed memory pool, you can do:

from librmm_cffi import librmm_config as rmm_cfg
rmm_cfg.use_managed_memory = True
rmm_cfg.use_pool_allocator = True

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

I've finally got a chance to test this and I can confirm enabling RMM's managed memory works for me. I simply made sure each worker enables managed memory, and the adapted code from #57 (comment) looks like this:

from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import cudf, dask_cudf

# Use dask-cuda to start one worker per GPU on a single-node system.
# When you shut down this notebook kernel, the Dask cluster also shuts down.
cluster = LocalCUDACluster(ip='0.0.0.0', n_workers=1, device_memory_limit='10000 MiB')
client = Client(cluster)

# Print client info
print(client)

# Code to simulate data
def generate_file(output_file, rows=100):
    with open(output_file, 'wb') as f:
        f.write(b'A,B,C,D,E,F,G,H,I,J,K\n')
        f.write(b'22,697,56,0.0,0.0,0.0,0.0,0.0,0.0,0,0\n23,697,56,0.0,0.0,0.0,0.0,0.0,0.0,0,0\n' * (rows // 2))

# Generate the test file
output_file = 'test.csv'
generate_file(output_file, rows=100_000_000)

# Enable RMM managed memory on each worker
def initialize_rmm():
    from librmm_cffi import librmm_config as rmm_cfg
    import cudf
    cudf.rmm.finalize()
    rmm_cfg.use_managed_memory = True
    return cudf.rmm.initialize()

client.run(initialize_rmm)

# Read the file with dask_cudf
df = dask_cudf.read_csv(output_file, chunksize='100 MiB')
print(df.head(10).to_pandas())

# Sort with dask_cudf
df = df.sort_values(['A', 'B', 'C'])

Similarly, the code in #65 (comment) works too.

@VibhuJawa could you try it out as well?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

One additional comment: I also did some profiling on chunksize, setting it to four different values: 100 MiB, 200 MiB, 500 MiB and 1000 MiB.

First, without setting a device_memory_limit, I get 3m45s, 2m50s, 1m36s and 1m23s for the values above, respectively. Then, setting device_memory_limit='10000 MiB', there's little variation across chunk sizes, with runtimes staying in the 4m20s-4m40s range. In other words, Dask today is in general slower when moving data to/from host, but it will also allow data to spill to disk.

Since copying data to host in Dask requires allocating NumPy arrays in host memory, this is one of the limiting factors (whereas with managed memory in C++ libraries such as cuDF, spilling is essentially free since we don't necessarily have to pay the price of a host allocation). However, I worked on a fix for NumPy that was released only in 1.17.1 (numpy/numpy#14216 -- not yet available through conda, only pip), and it reduces execution time from 4m20s-4m40s down to 3m10s-4m10s.
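
To make the host-allocation cost concrete, here's a rough illustrative timing (not from the runs above; the size and page-touch stride are arbitrary) of allocating and faulting in the kind of NumPy buffer Dask creates when spilling a device object:

import time
import numpy as np

nbytes = 2**30  # 1 GiB, purely illustrative

t0 = time.perf_counter()
buf = np.empty(nbytes, dtype='u1')
buf[::4096] = 0  # touch every page so the page-fault cost is included
print(f"allocate + touch 1 GiB: {time.perf_counter() - t0:.3f} s")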

@mrocklin these numbers may interest you as well.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

@jrhemstad, does managed here mean cudaMallocManaged?

from dask-cuda.

VibhuJawa avatar VibhuJawa commented on September 17, 2024

CC: @beckernick , see thread for spill-over discussion.

from dask-cuda.

datametrician avatar datametrician commented on September 17, 2024

@kkraus14 @mrocklin @quasiben how can we solve this quickly...

from dask-cuda.

quasiben avatar quasiben commented on September 17, 2024

We just spent time chatting about resolving this issue with @beckernick. It was suggested to try CUDA managed memory. This solution has the drawback of not being supported with UCX message passing, so the user would have to use TCP. UCX support for CUDA managed memory will probably land near the end of this year, though this can potentially change with different resource allocation.

from dask-cuda.

datametrician avatar datametrician commented on September 17, 2024

What else can we do in the short term?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

I don't think we have any real alternatives to using managed memory to solve this issue, but @jrhemstad may have ideas on how we could emulate managed memory somehow on the RMM side without actual managed memory.

If we don't have any real alternatives as I believe to be the case, I think the best way to handle this is to focus on speeding up managed memory support in UCX.

from dask-cuda.

mrocklin avatar mrocklin commented on September 17, 2024

from dask-cuda.

datametrician avatar datametrician commented on September 17, 2024

SOL, we can sort 1TB on 16 GPUs in ~70 seconds right now. I believe MapR holds the record on 1004 (4-core) nodes at ~50 seconds. Given that a DGX-2 has fewer than 60 CPU cores, it seems like an SOL-to-SOL comparison favors the GPU.

from dask-cuda.

mrocklin avatar mrocklin commented on September 17, 2024

from dask-cuda.

datametrician avatar datametrician commented on September 17, 2024

That is from disk (csv) and writing back to disk.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

What if we copied managed memory over to non-managed memory before sending it?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

What if we copied managed memory over to non-managed memory before sending it?

I would then argue that we would increase the memory footprint in such a case, and that's probably going to limit problem sizes even more. That said, my biased/preferred course of action would still be to work on managed memory support in UCX sooner.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

We could reuse the same buffer to copy into. Admittedly it will still increase the memory footprint, but by a fixed amount. It may also help with other issues (buffer registration).

Agree this is a workaround. However if we are trying to do something sooner than fixing UCX, this could be one path to doing that.
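
A rough sketch of that staging idea (purely illustrative, not existing dask-cuda code; numba is used here just to make the managed vs. regular device allocations explicit, and the sizes are arbitrary):

import numpy as np
from numba import cuda

CHUNK_ELEMS = 16 * 2**20  # ~64 MiB of float32 per staging copy

# Large managed source buffer standing in for a cuDF allocation, plus a
# fixed-size, non-managed device buffer that is reused for every chunk.
managed = cuda.managed_array(4 * CHUNK_ELEMS, dtype=np.float32)
bounce = cuda.device_array(CHUNK_ELEMS, dtype=np.float32)

for start in range(0, managed.size, CHUNK_ELEMS):
    stop = min(start + CHUNK_ELEMS, managed.size)
    # Copy a chunk into the reusable non-managed buffer; each iteration adds
    # a synchronization point, which is the overhead raised in the next comment.
    bounce[: stop - start].copy_to_device(managed[start:stop])
    cuda.synchronize()
    # ...hand bounce[: stop - start] to the transport here...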

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

I see now that you're suggesting a fixed memory footprint, which hadn't occurred to me before. In that case, there's still a big issue: for larger buffers we would need to do multiple copies, which would incur various synchronization steps and decrease performance too. I'm very inclined to believe that the gain we would get from using managed memory in this case would be consumed by this behavior.

I'm not opposed to someone trying that out if they have the bandwidth to do it, but I still think that time would be better spent working directly with the UCX folks to solve the issue at its core.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

So I may be misunderstanding, but the motivation for using managed memory was to avoid a crash not improve performance. Is that correct? If so, I don't think we need to be concerned about degraded performance because it would still run (which is infinitely better 😉). In any event, it is very difficult to reason correctly about how much of a performance penalty one would take. Much easier to run it and benchmark it.

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

It's actually twofold: there may be an OOM crash, but managed memory also brings a significant performance improvement with TCP, which I can't explain but is very useful.

Apart from that, there's another issue I forgot to mention -- which is arguably more important -- and it is that we can't currently have any managed memory if we're using UCX: all those allocations will be captured by UCX. IOW, the application must not contain any managed memory; if it does, the application crashes, as we have no way to force UCX not to capture that memory.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

It's actually twofold: there may be an OOM crash, but managed memory has also a significant improvement in performance with TCP, which I can't explain but it's very useful.

If that's the case, could we just only use TCP with managed memory? Or is there a downside here?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

If that's the case, could we just only use TCP with managed memory? Or is there a downside here?

I'm not sure I understand your question. If we use TCP, then there's no NVLink, which means transfers are limited by TCP bandwidth.

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

...managed memory has also a significant improvement in performance with TCP...

Maybe I'm not understanding. Is this useful or should we ignore it?

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

Maybe I'm not understanding. Is this useful or should we ignore it?

With TCP it's useful, but for the reasons we discussed above we can't use managed memory with UCX, so in that case it's not useful (even though my initial hope was that it would work and we would see a performance improvement with UCX too).

from dask-cuda.

kkraus14 avatar kkraus14 commented on September 17, 2024

managed memory has also a significant improvement in performance with TCP, which I can't explain but it's very useful.

This is likely because managed memory optimizes the device <--> host memory transfer, which can provide a 4x speedup over a naive non-pinned device <--> host memory transfer.

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

@kkraus14 by "optimizing" you mean that it internally uses pinned memory or are you suggesting there are even further optimizations?

from dask-cuda.

kkraus14 avatar kkraus14 commented on September 17, 2024

Internally it uses pinned memory and then it tries to be smart about how it prefetches memory and overlaps the transfers with compute.
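
For illustration, a small benchmark sketch of the pageable vs. pinned difference being described (assumes cupy; the helper and the ~256 MiB size are made up for the example):

import time
import numpy as np
import cupy as cp

def pinned_empty(n, dtype=np.float32):
    # Page-locked (pinned) host buffer exposed as a NumPy array.
    mem = cp.cuda.alloc_pinned_memory(n * np.dtype(dtype).itemsize)
    return np.frombuffer(mem, dtype, n)

n = 64 * 2**20                        # 64M float32 elements, ~256 MiB
d_arr = cp.zeros(n, dtype=cp.float32)

pageable = np.empty(n, dtype=np.float32)
pinned = pinned_empty(n)

for name, host in [("pageable", pageable), ("pinned", pinned)]:
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    d_arr.get(out=host)               # device -> host copy into the given buffer
    cp.cuda.Stream.null.synchronize()
    print(name, time.perf_counter() - t0)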

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

Yeah we might look at doing this (spilling to pinned memory) ourselves. There has been some discussion about adding some sort of pinned memory support with RMM. Though there is likely some work needed on the dask-cuda side to spill to pinned memory. Not entirely sure what this will look like yet.

from dask-cuda.

beckernick avatar beckernick commented on September 17, 2024

cc @kkraus14 @quasiben @rjzamora @VibhuJawa @randerzander

from dask-cuda.

jakirkham avatar jakirkham commented on September 17, 2024

If we do look into spilling to pinned memory with RMM, issue ( rapidsai/rmm#260 ) is tracking the relevant work needed/done.

from dask-cuda.

pentschev avatar pentschev commented on September 17, 2024

With TPCx-BB efforts being largely successful, I'd say this has been fixed or improved substantially, is that correct @VibhuJawa ? Are we good closing this or should we keep it open?

from dask-cuda.
