
Comments (7)

sathyaphoenix commented on April 18, 2024

I'd recommend not spinning up multiple instances of cachelib, and instead using pools to partition the available memory. The motivation for this is in line with what @sjoshi6 brought up. Copying 32 bytes might not be a significant CPU cost compared to leaving fragmented memory that is harder to manage across multiple instances. If you'd like to avoid std::string (since it uses heap allocation past 20 bytes and incurs a malloc + copy), you can still allocate memory on the stack, copy the contents, and wrap the stack memory into an Item::Key to call the find APIs, as long as the calls happen in the same call stack. Lots of performance-critical applications do this trick.
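
As a rough illustration of that trick (a sketch, not official CacheLib code: the prefix, the 64-byte cap, and the ReadHandle return type are assumptions, and older releases name the handle ItemHandle):

#include <cassert>
#include <cstring>
#include <folly/Range.h>
#include <cachelib/allocator/CacheAllocator.h>

using Cache = facebook::cachelib::LruAllocator;

// Sketch of the stack-buffer trick: copy prefix + key into stack memory and
// wrap it as a Key (Item::Key is a folly::StringPiece) for a lookup that
// completes within this call stack. kPoolPrefix and the 64-byte cap are
// illustrative assumptions.
Cache::ReadHandle findWithPrefix(Cache& cache, folly::StringPiece userKey) {
  const folly::StringPiece kPoolPrefix{"pool1:"};
  char buf[64];  // stack storage, no heap allocation
  assert(kPoolPrefix.size() + userKey.size() <= sizeof(buf));
  std::memcpy(buf, kPoolPrefix.data(), kPoolPrefix.size());
  std::memcpy(buf + kPoolPrefix.size(), userKey.data(), userKey.size());
  // The Key only borrows buf, so it must not outlive this call stack.
  return cache.find(folly::StringPiece{buf, kPoolPrefix.size() + userKey.size()});
}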


therealgymmy commented on April 18, 2024

How do we know the cache is created? Do we only need to check cache == nullptr?

The constructor will throw if there are bad configs (you will see std::invalid_argument). The cache is created if the constructor didn't throw. Typically we recommend you create the cache as a std::unique_ptr<...>, so it is easy for you to move it around and destroy.
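
A minimal sketch of that setup (the cache size and name below are placeholders):

#include <iostream>
#include <memory>
#include <stdexcept>
#include <cachelib/allocator/CacheAllocator.h>

using Cache = facebook::cachelib::LruAllocator;

// Sketch: hold the cache behind a std::unique_ptr and treat a throwing
// constructor as a bad config.
std::unique_ptr<Cache> makeCache() {
  Cache::Config config;
  config.setCacheSize(1UL * 1024 * 1024 * 1024)  // 1 GB, placeholder
        .setCacheName("example");                // placeholder name
  try {
    return std::make_unique<Cache>(config);  // throws std::invalid_argument on a bad config
  } catch (const std::invalid_argument& ex) {
    std::cerr << "bad cache config: " << ex.what() << '\n';
    return nullptr;  // "cache created" == constructor did not throw
  }
}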

Is there any easier way to accommodate the overhead?

Can you clarify this question more? What overheads are you referring to?

Is there a better way to support having the same key in different pools?

We currently do not support this. Because we share the index across all memory pools, keys must be unique. On insertion, we could theoretically allow you to pass two keys (prefix + actual key) and just memcpy them into the item memory. However, on lookup we have to concatenate, because we need the key to be (prefix + actual key). And I suspect the lookup is the most expensive part here, which we don't have a good way to solve.

Have you measured how much perf overhead this is? (If keys are small <15 bytes, this shouldn't incur heap allocation if using std::string). If overhead is too much, you should consider using multiple CacheLib instances instead of cache pools.


tangliisu commented on April 18, 2024

Thank you for the quick reply!

Can you clarify this question more? What overheads are you referring to?

I don't think we can directly add pools in the following way:

cache->addPool(name1, 30GB);
cache->addPool(name2, 15GB);

My understanding is that there is a fixed overhead for cachelib to manage the cache, so the memory that can actually be allocated to the two pools is less than 45GB. The way I allocate memory to pool1 and pool2 is therefore:

cache->addPool(name1, cache->getCacheMemoryStats().cacheSize * 30 / (30 + 15));
cache->addPool(name2, cache->getCacheMemoryStats().cacheSize * 15 / (30 + 15));

but I am not sure if this is good practice. How do you usually set pool sizes when multiple pools are needed?
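
For reference, a self-contained sketch of that proportional split (pool names and the 30:15 ratio are the values from above; this assumes getCacheMemoryStats().cacheSize reports the memory actually usable for pools after cachelib's own overhead):

#include <cstddef>
#include <cachelib/allocator/CacheAllocator.h>

using Cache = facebook::cachelib::LruAllocator;

// Sketch: split the usable cache memory between two pools by ratio.
// Names and the 30:15 ratio are just the values from this thread.
void addPoolsByRatio(Cache& cache) {
  const size_t usable = cache.getCacheMemoryStats().cacheSize;
  cache.addPool("name1", usable * 30 / (30 + 15));
  cache.addPool("name2", usable * 15 / (30 + 15));
}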

Have you measured how much perf overhead this is? (If keys are small <15 bytes, this shouldn't incur heap allocation if using std::string). If overhead is too much, you should consider using multiple CacheLib instances instead of cache pools.

We haven't done a perf test yet, but will do one soon. Does using multiple cachelib instances impact perf or have other disadvantages compared to using multiple cache pools in a single CacheLib instance?


sjoshi6 commented on April 18, 2024

There are some experimental cachelib features which might be very useful for us in the future, such as "Automatic pool resizing" and "Memory Monitor". I believe we won't be able to leverage them well if we have multiple cachelib instances in a process.

A typical key size we encounter is ~32 bytes.


tangliisu commented on April 18, 2024

@sathyaphoenix that's a great suggestion, thank you! I have another question on persistent cache. If we want to enable persistent cache, we have to set it up as follows:

config.enableCachePersistence(path);
Cache cache(Cache::SharedMemNew, config);

Does this local path store metadata only, or does it store the whole old cache instance? If it only stores metadata, can I expect the metadata to be very small?


sathyaphoenix commented on April 18, 2024

Does this local path store metadata only, or does it store the whole old cache instance? If it only stores metadata, can I expect the metadata to be very small?

It only stores some metadata, which should be less than a KB. It can live on any file system and is not performance critical. All the data and any heap metadata are persisted either through shared memory or on-device. The metadata stored in the files under the cache directory is just the limited information needed to recover all the other pieces.
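
For context, the attach-or-create flow around cache persistence usually looks roughly like the sketch below (the directory path and cache size are placeholders, and error handling is simplified):

#include <memory>
#include <cachelib/allocator/CacheAllocator.h>

using Cache = facebook::cachelib::LruAllocator;

// Sketch: try to reattach to a previously persisted cache, otherwise create a
// fresh one. Path and size are placeholders.
std::unique_ptr<Cache> initPersistentCache() {
  Cache::Config config;
  config.setCacheSize(45UL * 1024 * 1024 * 1024)        // placeholder size
        .enableCachePersistence("/path/to/cache/dir");  // holds only the small metadata
  try {
    // Reattach to the shared-memory segments left by a previous run.
    return std::make_unique<Cache>(Cache::SharedMemAttach, config);
  } catch (const std::exception& ex) {
    // Nothing usable to attach to (or incompatible config): start a fresh cache.
    return std::make_unique<Cache>(Cache::SharedMemNew, config);
  }
}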


sathyaphoenix commented on April 18, 2024

@tangliisu I'll close this ticket since the original questions are answered. Please feel free to open a new one if you have any additional questions :)

