
Comments (9)

ashimdey052 commented on July 22, 2024

Hello Sir,
Merry Christmas and Happy New Year! I have been able to solve the problem when running 'HR_EDGE_CACHE' and 'HR_ON_PATH'. But when I try to run 'HR_CLUSTER' on real topologies like 'GEANT', 'WIDE', 'GARR', I get the error ValueError: There are nodes not labelled with cluster information. Should I run this on different topologies?
How can I resolve it, sir?

from icarus.

lorenzosaino commented on July 22, 2024

Hi Ashim, glad to hear you were able to solve your first problem.

The reason for this issue is that, when using HR_CLUSTER as a strategy, you also need an algorithm that assigns cache nodes to clusters. See sec. 4.C of this paper. That error message emerged because the HR_CLUSTER strategy expects the topology to include data telling which cluster each cache node belongs to, but it didn't.

To do this, you need to use a cache placement algorithm that also assigns cache nodes to clusters. You can create one of your own, or use the CLUSTERED_HASHROUTING cache placement algorithm implemented here:

@register_cache_placement('CLUSTERED_HASHROUTING')
def clustered_hashrouting_cache_placement(topology, cache_budget, n_clusters,
                                          policy, distance='delay', **kwargs):
    """Deploy caching nodes for hashrouting with clusters

    Parameters
    ----------
    topology : Topology
        The topology object
    cache_budget : int
        The cumulative cache budget
    n_clusters : int
        The number of clusters
    policy : str (node_const | cluster_const)
        The cache allocation policy: 'node_const' assigns the same space to
        every node, 'cluster_const' assigns the same space to every cluster
    distance : str
        The attribute used to quantify distance between pairs of nodes.
        Default is 'delay'

    References
    ----------
    .. [1] L. Saino, I. Psaras and G. Pavlou, Framework and Algorithms for
           Operator-managed Content Caching, in IEEE Transactions on
           Network and Service Management (TNSM), Volume 17, Issue 1, March 2020
           https://doi.org/10.1109/TNSM.2019.2956525
    .. [2] L. Saino, On the Design of Efficient Caching Systems, Ph.D. thesis,
           University College London, Dec. 2015. Available:
           http://discovery.ucl.ac.uk/1473436/
    """
    icr_candidates = topology.graph['icr_candidates']
    if n_clusters <= 0 or n_clusters > len(icr_candidates):
        raise ValueError("The number of clusters must be positive and <= the "
                         "number of ICR candidate nodes")
    elif n_clusters == 1:
        clusters = [set(icr_candidates)]
    elif n_clusters == len(icr_candidates):
        clusters = [set([v]) for v in icr_candidates]
    else:
        clusters = compute_clusters(topology, n_clusters, distance=distance,
                                    nbunch=icr_candidates, n_iter=100)
    deploy_clusters(topology, clusters, assign_src_rcv=True)
    if policy == 'node_const':
        # Each node is assigned the same amount of caching space
        cache_size = iround(cache_budget / len(icr_candidates))
        if cache_size == 0:
            return
        for v in icr_candidates:
            topology.node[v]['stack'][1]['cache_size'] = cache_size
    elif policy == 'cluster_const':
        # Each cluster gets the same budget, split evenly among its nodes
        cluster_cache_size = iround(cache_budget / n_clusters)
        for cluster in topology.graph['clusters']:
            cache_size = iround(cluster_cache_size / len(cluster))
            for v in cluster:
                if v not in icr_candidates:
                    continue
                topology.node[v]['stack'][1]['cache_size'] = cache_size
    else:
        raise ValueError('clustering policy %s not supported' % policy)

The design of that algorithm is described in Sec. 7.E of this paper.

You can use that cache placement algorithm with every topology.
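For reference, selecting this placement in an experiment configuration could look roughly like the fragment below. Here `default` is a plain dict standing in for the Tree-like configuration object icarus config files use, and the parameter values are illustrative placeholders, not recommendations:

```python
# Hypothetical config fragment; parameter names match the
# clustered_hashrouting_cache_placement signature above, values are assumed.
default = {'cache_placement': {}}
default['cache_placement']['name'] = 'CLUSTERED_HASHROUTING'
default['cache_placement']['n_clusters'] = 4            # assumed cluster count
default['cache_placement']['policy'] = 'cluster_const'  # or 'node_const'
```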

Hope this helps, please let me know if you have any further issue.


ashimdey052 commented on July 22, 2024

Thank you for replying, Sir! It works!
Actually, I want to compare and plot all available hash routing strategies in a single experiment. So, for HR_CLUSTER, I should declare default['cache_placement']['name'] = 'CLUSTERED_HASHROUTING', and for the other hash routing strategies default['cache_placement']['name'] = 'UNIFORM'. How can I handle this conflict in a single experiment?
Sorry for my endless question :(


lorenzosaino commented on July 22, 2024

Yes, you can. There are two ways to do it, as far as I can tell:

  1. You can use the CLUSTERED_HASHROUTING cache placement everywhere, including with non-hashrouting strategies. It should work fine. If you set default['cache_placement']['policy'] = 'node_const', it effectively behaves the same as the UNIFORM cache placement when used with non-hashrouting strategies.
  2. The configuration file is interpreted as standard Python code, so you can use any Python statements. You could add a conditional to use CLUSTERED_HASHROUTING for hashrouting policies and UNIFORM for others. That would look like this (you will need to adapt it for your specific config file):
if experiment['strategy']['name'] == "HR_CLUSTER":
    experiment['cache_placement']['name'] = 'CLUSTERED_HASHROUTING'
else:
    experiment['cache_placement']['name'] = 'UNIFORM'


ashimdey052 commented on July 22, 2024

Sir!
Thank you for answering with great patience. I am still uncovering Icarus and trying to make the best use of it!


lorenzosaino commented on July 22, 2024

I am closing this issue for now. If you have any further issues, please feel free to reopen it.


ashimdey052 commented on July 22, 2024

Hello Sir!
How can I find these two things from icarus:

  1. Popularity of content p (out of 1.0) at any time t
  2. Available free Cache size c (out of 1.0) of a Cache Node n at any time t


lorenzosaino commented on July 22, 2024

Hi Ashim,

  1. It depends on the workload you are using. If you are using the stationary workload, then the popularity of each item is constant over the simulation. If you are using a trace driven workload, then you need to measure it from the trace. If you need to compute the popularity online, i.e., you want your strategy to learn the popularity while the simulation is being executed, you will need to implement code to do that. There is no function to get that information at the moment.
  2. There's no method to get the current amount of free cache space, mainly because in simulations caches are expected to be always full at steady state. If you need to compute this online and make it accessible to your strategy, the way I would go about it would be to add a method to the NetworkView class that returns that information. I think it shouldn't be too difficult to implement.


ashimdey052 commented on July 22, 2024
  1. Yes, Sir, I am using the stationary workload (meaning content popularity is Zipf-distributed), so the popularity of each item should be constant over the simulation. But how can I find that popularity value (constant over time) for some content i at any time t, during the j-th request? If no function is available for that, can you provide any guidance for an implementation?

  2. Can you be more specific about this implementation, Sir?

