
Comments (8)

lia54 commented on July 19, 2024

Hi there! To run visual.py you will need a config file similar to visual.config, specifying the number of subplots you would like to have. Normally you can plot the hit rate, the weights for LeCaR and the Cacheus variants, the p parameter in ARC, the size of SR (the q queue) in Cacheus, and some other internal parameters. One consideration when generating these plots is the window size. Right now it is set to 10% of the total number of requests (see the call to get_algorithm in the run function inside visual.py); this can be varied to get a good representation depending on the workload.
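As a rough illustration (not the repo's actual code, and the function name is hypothetical), a sliding-window hit-rate series with the window defaulting to 10% of the total number of requests could look like this:

```python
def windowed_hit_rate(hits, window=None):
    """hits: one boolean per request (True = cache hit).
    Returns the hit rate over each sliding window of requests."""
    if window is None:
        window = max(1, len(hits) // 10)  # default: 10% of total requests
    series = []
    running = sum(hits[:window])          # hits inside the first window
    series.append(running / window)
    for i in range(window, len(hits)):
        # slide the window by one request: add the new, drop the oldest
        running += hits[i] - hits[i - window]
        series.append(running / window)
    return series
```

A smaller window gives a noisier but more responsive curve; a larger one smooths out short churn or scan phases.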

Regarding pollutionator.py, this was originally created to plot pollution vs. time for different algorithms. Pollution is defined as the number of items in the cache that have been accessed at least twice in the past but are not among the last 2N unique accesses. You can expect LRU to have zero pollution and LFU to have higher pollution values.
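A minimal sketch of that definition (illustrative, not pollutionator.py itself; the data structures are assumptions):

```python
def pollution(cache, access_counts, access_log, cache_size):
    """Count cached items accessed at least twice in the past that are
    NOT among the last 2N unique accesses, where N is the cache size.

    cache: set of items currently cached
    access_counts: dict item -> total number of past accesses
    access_log: list of items in access order (oldest first)
    """
    # Collect the last 2N *unique* items by walking the log backwards.
    recent, seen = [], set()
    for item in reversed(access_log):
        if item not in seen:
            seen.add(item)
            recent.append(item)
        if len(recent) >= 2 * cache_size:
            break
    recent = set(recent)
    return sum(1 for x in cache
               if access_counts.get(x, 0) >= 2 and x not in recent)
```

Under this metric LRU scores zero because everything it keeps is, by construction, among the most recent unique accesses.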

from cacheus.

lia54 commented on July 19, 2024

Hey, one way to think about churn patterns is in terms of frequency: items belonging to a churn workload all have the same frequency. In that sense, LeCaR can only respond to this situation by leveraging its LFU capabilities, which prioritize items with higher frequency so that they stay in the cache longer. However, it takes some time before LeCaR recognizes that LFU is the right choice for churn periods, because the weights only change after enough feedback (hits in history) has accumulated. That means, in the worst case, if LeCaR is evicting with LRU just before a churn period starts, the delay in switching toward LFU can incur some extra misses; but generally speaking it can handle churn by relying on LFU.
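For intuition, here is a hedged sketch of that regret-based weight mechanism (illustrative names and constants, not the repo's implementation): when a missed item is found in an expert's eviction history, that expert made a mistake and its weight is shrunk multiplicatively, with older mistakes discounted.

```python
import math
import random

LEARNING_RATE = 0.45                 # assumed value for illustration
DISCOUNT = 0.005 ** (1 / 100)        # assumed d = 0.005^(1/N) with N = 100

w = {"LRU": 0.5, "LFU": 0.5}

def penalize(expert, time_in_history):
    """Shrink the weight of the expert whose past eviction caused this
    miss; the longer ago the eviction, the smaller the penalty."""
    regret = DISCOUNT ** time_in_history
    w[expert] *= math.exp(-LEARNING_RATE * regret)
    total = w["LRU"] + w["LFU"]      # renormalize so weights sum to 1
    w["LRU"] /= total
    w["LFU"] /= total

def choose_expert():
    """Pick the eviction policy randomly in proportion to the weights."""
    return "LRU" if random.random() < w["LRU"] else "LFU"
```

During a churn period, misses keep landing in LRU's history, so LRU is repeatedly penalized and the weights drift toward LFU; the drift is exactly the feedback delay described above.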


lia54 commented on July 19, 2024

Hey there, sorry for the late reply.

In the case of CR-LFU, we see improvement over vanilla LFU because of the way items from the churn pattern get evicted. As I mentioned earlier, items belonging to a churn period have the same frequency. What CR-LFU does differently is choose the MRU item (as you said, the item that was most recently accessed) among all the items that share the lowest frequency (the items belonging to the churn period). This helps because with churn patterns we should keep the maximum number of items possible in the cache. In fact, if we run OPT (the offline optimal caching algorithm) on this kind of pattern, we see that the decision achieving the highest possible hit rate is to keep replacing the MRU item to accommodate new incoming items and leave the rest of the cached items intact.
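That victim-selection rule can be sketched in a few lines (a simplified illustration, not the Cacheus code):

```python
def cr_lfu_victim(entries):
    """entries: dict item -> (frequency, last_access_time).
    Vanilla LFU evicts some minimum-frequency item; CR-LFU breaks the
    tie by evicting the MRU item among the minimum-frequency group."""
    min_freq = min(freq for freq, _ in entries.values())
    candidates = [it for it, (f, _) in entries.items() if f == min_freq]
    # MRU within the tie group = largest last_access_time
    return max(candidates, key=lambda it: entries[it][1])
```

Because the churned items all sit in the minimum-frequency tie group, this rule sacrifices only the one slot the MRU item occupies and leaves the rest of the group resident, which is the OPT-like behavior described above.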

Regarding why LeCaR performs better at small cache sizes, again the explanation is its use of LFU. As you mentioned, when cache sizes are smaller it is important to keep the highly frequent items in the cache. We tried to convey that idea in the paper when we said: "storage working sets are telescoping in nature with larger subsets of items accessed at a lower frequency often each entirely subsuming one or more smaller subsets of items accessed at a higher frequency". We also found that at small cache sizes, LRU-friendly workloads become churn workloads, as shown in the paper, and churn is handled better by LFU.

As for viewing churn as a sum of multiple scans: yes, it can be seen that way, but it is important to understand that churn does not necessarily need to be multiple scans. It can be a randomly distributed subset of items that repeats after the same period of time.

Thanks for the interest!


zy1024cs commented on July 19, 2024

OK, thank you for your patience. I will study the ideas carefully. Best wishes to you.


zy1024cs commented on July 19, 2024

Well, thank you for your answer. In addition, I noticed that your previous research work, LeCaR, was used to handle traces with churn characteristics in the slides (page 4). After reading the related articles about LeCaR from 2018, I still can't understand why LeCaR can handle churn. Could you help me with this? Thank you again for your kind guidance!

[screenshot of the slide attached]


zy1024cs commented on July 19, 2024

So can I understand it like this? LeCaR can overcome churn because it has the characteristics of LFU, so it is better than ARC or LIRS on churn workloads (because ARC and LIRS themselves do not support LFU very well). In addition, is CR-LFU in Cacheus better than LFU on churn workloads because, when evicting, CR-LFU chooses the block with the lowest access frequency and the latest access time? In other words, among the lowest-frequency blocks, the one that was just accessed is evicted first.
Also, why does LeCaR outperform the others when the cache size is small? Is it because when the cache size itself is small, all access patterns look similar to churn, that is, the requests for different blocks tend to be randomly distributed?
Finally, about the access pattern of churn: if you look at one time period of churn in isolation, it looks similar to a scan, so the whole churn seems to be a splicing of multiple scans in time (schematic diagram shown below). I don't know if it can be understood this way.
Thank you again for your kind reply!

[schematic diagram of churn as multiple spliced scans]


lia54 commented on July 19, 2024

Thanks! :-)


sylab commented on July 19, 2024

Closing due to no activity.

