
ccache's Introduction

Hi.

Currently focusing on logdk, a log aggregation framework written in Zig and powered by DuckDB.

Otherwise, I've largely been writing Zig libraries. If you're interested in learning Zig, check out my Learning Zig series of blog posts.

I also have a few Elixir libraries you might find useful:

And a few low-friction services:

My most popular repos are:

Semi-retired, but always open to hearing about opportunities. Contact on my blog.

ccache's People

Contributors

alexejk, bep, bluemonday, buglloc, dvdplm, edwardbetts, eli-darkly, gopalmor, heyitsanthony, idsulik, imxyb, jdeppe-pivotal, jonathonlacher, karlseguin, matthewdale, miparnisari, nwtgck, primalmotion, rfyiamcool, sargun, spicydog


ccache's Issues

Fix flaky test

Flaky test seen in https://github.com/karlseguin/ccache/actions/runs/6953182798/job/18917940507#step:4:7

go test -race -count=1 ./...
--- FAIL: Test_ConcurrentClearAndSet (0.09s)
    cache_test.go:438: expected true, got false
FAIL
FAIL	github.com/karlseguin/ccache/v3	7.077s
?   	github.com/karlseguin/ccache/v3/assert	[no test files]
FAIL
make: *** [Makefile:17: t] Error 1
Error: Process completed with exit code 2.

Do not promote on Get

Hi. Is there any possibility of adding a method like GetWithoutPromote that only returns the element? I use the cache, but when an item expires I refresh it myself with a new version of the item.
Thanks!
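
Something like this, perhaps (just a sketch; it assumes the internal bucket lookup seen in other snippets in these issues, where get would be an internal accessor):

// sketch only: return the item without queueing a promotion
func (c *Cache) GetWithoutPromote(key string) *Item {
	return c.bucket(key).get(key)
}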

Generic key type

I was wondering why the value was made generic with the introduction of Go generics, but not the key. A quick check of the code revealed that some parts of the API rely on a string key, but overall it seems possible to support other types.

The motivation behind this is that we try to avoid unneeded GC overhead at work, and we often have an integer key. Making the key generic would avoid the extra conversion to a string just to look up values in the cache.
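
As a self-contained toy (not ccache's API), this is the kind of thing a generic key would enable; an integer key needs no string conversion:

package main

import "fmt"

// tinyCache is an illustrative generic container: any comparable type
// can be the key, so an int key avoids the string conversion entirely.
type tinyCache[K comparable, V any] struct {
	items map[K]V
}

func newTinyCache[K comparable, V any]() *tinyCache[K, V] {
	return &tinyCache[K, V]{items: make(map[K]V)}
}

func (c *tinyCache[K, V]) Set(key K, value V) { c.items[key] = value }

func (c *tinyCache[K, V]) Get(key K) (V, bool) {
	v, ok := c.items[key]
	return v, ok
}

func main() {
	c := newTinyCache[int, string]()
	c.Set(42, "value")
	if v, ok := c.Get(42); ok {
		fmt.Println(v) // integer key, no string conversion needed
	}
}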

OnDelete is not called when an item is evicted for size reasons

I might be missing something, but it seems that the OnDelete callback is not called when an item is removed because the size limit is reached. I think we'd just need to add these lines to the gc function:

if c.onDelete != nil {
	c.onDelete(item)
}

Does that sound right?

When max size is 3, Set() does not delete superfluous data

When I import "github.com/karlseguin/ccache/v3" and create a cache whose max size is 3, then add 4 elements, the cache still retains all four. Does Set() not run the LRU eviction? Shouldn't the LRU delete the first element when the 4th one is added?
Is there any minimum size limit for this cache?
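Here is a minimal sketch of what I mean (v3 API; GetSize is used the same way in another issue below):

package main

import (
	"fmt"
	"time"

	"github.com/karlseguin/ccache/v3"
)

func main() {
	cache := ccache.New(ccache.Configure[int]().MaxSize(3))
	for i := 0; i < 4; i++ {
		cache.Set(fmt.Sprintf("key-%d", i), i, time.Minute)
	}
	fmt.Println(cache.GetSize()) // reports 4 here, not 3
}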

Fetch returns expired items

I would expect Fetch() to behave a little differently and not return stale items. Instead, it seems the fetch function argument is invoked only if the item is missing entirely from the cache.
Would you be open to a PR changing that behavior, or to providing a FetchAndRefresh() that runs the fetch function when the item is either expired or missing?

Essentially this:

func (c *Cache) Fetch(key string, duration time.Duration, fetch func() (interface{}, error)) (*Item, error) {
    item := c.Get(key)
    if item != nil && !item.Expired() {
        return item, nil
    }
    value, err := fetch()
    if err != nil {
        return nil, err
    }
    return c.set(key, value, duration), nil
}

New release tag

We submitted #52 and would like to fetch the latest version. Could you please create a new release? Thank you.

Add new release

It looks like the latest GitHub release is from 2019, and there have been 12 commits since then, including changes like GetDropped and the migration to Go modules. Could you consider releasing a new version?

About Fetch

item, err := cache.Fetch("user:4", time.Minute*10, func() (interface{}, error) {
	// code to fetch the data in case of a miss
	// should return the data to cache and the error, if any
})

The function passed to Fetch currently can't accept parameters. Could we let it receive the key as a parameter, so we can rebuild the cache entry when no value is found for the key? Something like this:

item, err := cache.Fetch("user:4", time.Minute*10, func(key string) (interface{}, error) {
	// get the value from the db by id = key ("user:4")
	// and return it
})

It would be nice to have this.
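
For what it's worth, a closure already gets the same effect today, since the caller knows the key (sketch; loadUser is a made-up loader):

key := "user:4"
item, err := cache.Fetch(key, time.Minute*10, func() (interface{}, error) {
	return loadUser(key) // the closure captures key
})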

How to promote directly saved item?

Hi dude,
I need high availability and I'm thinking about promoting items as soon as they're saved. Is that possible? I don't want to wait for the first Get call on an item to promote it.

Memory leak during cleanup

There is a problem here. Bucket and list cleanup must be synchronized. Otherwise, if we concurrently call Set, we may end up with an empty list and non-empty buckets. In that case we get a memory leak like this. The leak is small, but it's still there. At the next cleanup these objects will be removed, but new leaking objects will appear again.

would like a version of Fetch that doesn't require a closure

This is sort of nitpicky performance-tuning stuff and I could be wrong on some of the details, but here's what I'm talking about:

Fetch currently takes a func() (interface{}, error). In other words, it doesn't pass the key (or any other parameters) to the function that computes new values. That means the function can't be defined globally or at any scope other than the one that knows what the key is. It has to be a closure, or a method of some object that knows what the key is.

Due to how the Go runtime works, that has some undesirable implications if you're trying to optimize for minimal heap allocations:

  • There is at least one extra ephemeral object allocated on the heap every time you do this. The values contained in closures, as far as I can tell—in this case the key, and any other local variables that are referenced in the closure—always escape to the heap.
  • That means potentially a lot of other things also escape to the heap that wouldn't otherwise, because of how Go's escape analysis works. In other words, if you have some struct type X that has methods with pointer receivers, and you declare a value of type X within some scope, you might be able to call those methods many times within that scope without ever requiring X to be on the heap as long as no pointers to it are returned out of the scope of those methods. But if your X value is referenced inside a closure, that's no longer possible, and that can lead to a whole chain of memory allocations that wouldn't otherwise have been necessary, for things that are referenced by X.

This is a kind of thing that wouldn't come up in a language like Java where you don't really have any choice because almost everything is an object on the heap. But in high-throughput services implemented in Go, since you can control that stuff to some degree as long as you're careful, it can be a bit annoying to have one's control limited by API choices like this... when as far as I can tell there's no great reason for the API to work that way. (That is, everywhere that you are calling the fetch function, you already know what the key is, so it would be perfectly easy to pass the key to the function. And that's how cache-loading functions work in pretty much every other caching API I've seen that has such a concept.)
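
To illustrate (a sketch; loadFromDB is a made-up loader, and the exact escape behavior depends on the compiler version):

// Because Fetch takes func() (interface{}, error), key must be captured
// in a closure, and captured variables escape to the heap:
func getUser(cache *ccache.Cache, key string) (*ccache.Item, error) {
	return cache.Fetch(key, time.Minute, func() (interface{}, error) {
		return loadFromDB(key) // key (and anything else captured) escapes
	})
}

// A key-accepting signature would allow a globally defined loader and no
// per-call closure allocation, e.g.:
//   cache.Fetch(key, time.Minute, loadFromDB)
//   where loadFromDB is func(key string) (interface{}, error)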

Problem with Race Conditions

Hi there! I'm doing some tests with this package, but it seems there's an issue when dealing with race conditions; I'm not sure. Any advice?

func (storage *Storage) GetTokenValue(key string, t interface{}) error {
    var (
        data []interface{}
        err  error
    )

    if item := Cache.Get(key); item != nil {
        if !item.Expired() {
            data = item.Value().([]interface{})
        }
    }
    if len(data) == 0 {
        data, err = getCacheFromRedis(key)
        if err != nil {
            return err
        }
    }

    if err = redis.ScanStruct(data, t); err != nil {
        return err
    }
    Cache.Set(key, data, time.Duration(10)*time.Minute)
    return nil
}

And, when I run go test -race -i ./... I get this warning:

==================
WARNING: DATA RACE
Read by goroutine 20:
  sync/atomic.AddUint32()
      /usr/local/Cellar/go/1.3.3/libexec/src/pkg/sync/atomic/race.go:147 +0x4e
  sync/atomic.AddInt32()
      /usr/local/Cellar/go/1.3.3/libexec/src/pkg/sync/atomic/race.go:140 +0x3c
  github.com/karlseguin/ccache.(*Item).shouldPromote()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/item.go:70 +0x48
  github.com/karlseguin/ccache.(*Cache).conditionalPromote()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:138 +0x67
  github.com/karlseguin/ccache.(*Cache).Get()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:52 +0xfc
  github.com/backstage/backstage/db.(*Storage).GetTokenValue()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/db/storage.go:77 +0xc0
  github.com/backstage/backstage/auth.get()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token.go:130 +0x174
  github.com/backstage/backstage/auth.RevokeTokensFor()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token.go:93 +0x146
  github.com/backstage/backstage/auth.(*S).TestRevokeTokensFor()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token_test.go:54 +0x20b
  runtime.call16()
      /usr/local/Cellar/go/1.3.3/libexec/src/pkg/runtime/asm_amd64.s:360 +0x31
  reflect.Value.Call()
      /usr/local/Cellar/go/1.3.3/libexec/src/pkg/reflect/value.go:411 +0xed
  gopkg.in/check%2ev1.func·003()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:763 +0x56b
  gopkg.in/check%2ev1.func·001()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:657 +0xf4

Previous write by goroutine 7:
  github.com/karlseguin/ccache.(*Cache).doPromote()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:171 +0x64
  github.com/karlseguin/ccache.(*Cache).worker()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:152 +0xae

Goroutine 20 (running) created at:
  gopkg.in/check%2ev1.(*suiteRunner).forkCall()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:658 +0x523
  gopkg.in/check%2ev1.(*suiteRunner).forkTest()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:795 +0x168
  gopkg.in/check%2ev1.(*suiteRunner).runTest()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:800 +0x3e
  gopkg.in/check%2ev1.(*suiteRunner).run()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/check.go:606 +0x4e8
  gopkg.in/check%2ev1.Run()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:92 +0x56
  gopkg.in/check%2ev1.RunAll()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:84 +0x12d
  gopkg.in/check%2ev1.TestingT()
      /Users/alberto/Code/golang/src/gopkg.in/check.v1/run.go:72 +0x4f1
  github.com/backstage/backstage/auth.Test()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/suite_test.go:10 +0x34
  testing.tRunner()
      /usr/local/Cellar/go/1.3.3/libexec/src/pkg/testing/testing.go:422 +0x10f

Goroutine 7 (running) created at:
  github.com/karlseguin/ccache.New()
      /Users/alberto/Code/golang/src/github.com/karlseguin/ccache/cache.go:37 +0x38e
  github.com/backstage/backstage/db.init()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/db/cache.go:5 +0xc3
  github.com/backstage/backstage/auth.init()
      /Users/alberto/Code/golang/src/github.com/backstage/backstage/auth/token_test.go:57 +0xac
  main.init()
      github.com/backstage/backstage/auth/_test/_testmain.go:48 +0x93
==================

get all keys in cache

Hi,

Is there any way to get all keys the cache contains? I need to do some forced cache eviction based on a prefix of the key, and I'm having a hard time making it work by using another structure to keep track of the keys. Just being able to list all registered keys from ccache would make my life much easier.

Thanks for the nice piece of software btw
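
For context, this is roughly the workaround structure I've been fighting with (sketch only; it also misses items that ccache evicts on its own, which is part of the problem):

package cacheindex

import (
	"strings"
	"sync"
	"time"

	"github.com/karlseguin/ccache"
)

// indexedCache keeps a parallel set of keys so prefix-based eviction is
// possible; ccache itself offers no way to enumerate keys.
type indexedCache struct {
	mu    sync.Mutex
	keys  map[string]struct{}
	cache *ccache.Cache
}

func newIndexedCache() *indexedCache {
	return &indexedCache{
		keys:  make(map[string]struct{}),
		cache: ccache.New(ccache.Configure()),
	}
}

func (ic *indexedCache) Set(key string, value interface{}, ttl time.Duration) {
	ic.mu.Lock()
	ic.keys[key] = struct{}{}
	ic.mu.Unlock()
	ic.cache.Set(key, value, ttl)
}

// DeletePrefix evicts every tracked key with the given prefix.
func (ic *indexedCache) DeletePrefix(prefix string) {
	ic.mu.Lock()
	defer ic.mu.Unlock()
	for key := range ic.keys {
		if strings.HasPrefix(key, prefix) {
			ic.cache.Delete(key)
			delete(ic.keys, key)
		}
	}
}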

Get/Set Races

I have some items whose values are very expensive to initialize. I want to initialize each of them on the first read of the corresponding key and then cache indefinitely (unless evicted due to cache size). I also want to do this atomically in such way that reads and writes to other keys in the cache can proceed while the expensive initialization occurs, but each value is initialized at most once.

I can work around the fact that the initialization is slow by setting a placeholder value with a mutex that will be released once the initialization is complete.

What I can't figure out how to do with this API is the atomic Get/Set (without using additional locks). Would you be open to adding a TrackingGetOrSet(key string, defaultValue interface{}, duration time.Duration) (item TrackedItem, didSet bool) method which atomically gets the key if found, or sets it to the new value if not? Looking at the code, it seems fairly straightforward to implement, as it can occur inside a bucket's RWMutex.

That said I can see it's a pretty specific request. I'm happy to make a PR.

PS: Is there a race condition in TrackingGet? It seems like the item could be removed from the cache between the get() and the track() calls... But maybe I'm missing something?

Thanks!

Bug report: item leak when c.promotables is busy

When setting a new key, Cache uses c.promote to add the new item to c.list. But when c.promotables is full, c.promote does nothing, which means the new item is never added to c.list.
That causes an item leak until a later Get happens while c.promotables has room. If there is no operation on this key before c.promote finally takes effect, the item's memory will never be released, because the map still references it.

// https://github.com/karlseguin/ccache/blob/master/cache.go

// Set the value in the cache for the specified duration
func (c *Cache) Set(key string, value interface{}, duration time.Duration) {
	c.set(key, value, duration, false)
}

func (c *Cache) set(key string, value interface{}, duration time.Duration, track bool) *Item {
	item, existing := c.bucket(key).set(key, value, duration, track)
	if existing != nil {
		c.deletables <- existing
	}
	c.promote(item)
	return item
}

func (c *Cache) promote(item *Item) {
	select {
	case c.promotables <- item:
	default:
	}
}

Can't seem to figure out how LayeredCache works because it doesn't work.

Why doesn't the code below work? Also, am I doing it right? I've tried multiple different ways and there's no error shown... please help, thanks.

I realised it seems to get stuck at displaying the value if it was never initialized. How do I resolve this? I think the .(int) assertion on a nil item will jam without any error being displayed.

package httpcachetesting

import (
	"fmt"

	"github.com/karlseguin/ccache"
)

var (
	HttpContentCache = ccache.Layered(ccache.Configure())
)

func HTTPCacheGet(urlhost string, urlreq string) (int, []byte, []byte, int, bool) {
	fmt.Printf("*** Start\n")
	httpcaches := HttpContentCache.Get(urlhost, "s"+urlreq) // same as stored in redis

	fmt.Printf("*** THIS MSG IS NOT SHOWING?!?!?! V : %v\n", httpcaches.Value().(int))
}
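
For reference: Get returns nil on a miss, as the item == nil pattern in a later issue shows, so the Value() call above runs on a nil item. A guard along these lines (sketch) avoids the silent failure:

httpcaches := HttpContentCache.Get(urlhost, "s"+urlreq)
if httpcaches == nil || httpcaches.Expired() {
	// handle the miss; don't assert .(int) on a nil item
} else {
	fmt.Printf("V : %v\n", httpcaches.Value().(int))
}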

garbage collection "unbearable" for millions of entries with different entry sizes

The program is good and runs fairly well, but I saw the code and realised the roughly 350 bytes of overhead associated with it.

Maybe you can look into https://github.com/allegro/bigcache

and see if you can reduce what you have written to something less GC-intensive. (You implemented your own gc() inside the code, which seems to be highly CPU-intensive: with a 16GB RAM data cache of around 1 million entries it is very "slow" at times, with huge GC processing, I think.)

Is there any way to look into the code, speed it up with GC in mind, and reduce the 350 bytes further?

How to get the max performance?

Hi dude, what is the best way to get maximum performance out of ccache, with concurrent transactions or without?
Thanks.

Add benchmarks

Go makes it really easy to write synthetic benchmarks. It would be nice to add some to ccache, since right now it's hard to know the perf impact (in terms of both CPU and allocations) of an arbitrary PR change.
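
A starting point might look like this (a sketch against the pre-generics API used elsewhere in these issues):

package ccache_test

import (
	"strconv"
	"testing"
	"time"

	"github.com/karlseguin/ccache"
)

// BenchmarkGet measures the read hot path on a single warm key.
func BenchmarkGet(b *testing.B) {
	cache := ccache.New(ccache.Configure())
	cache.Set("key", "value", time.Minute)
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		cache.Get("key")
	}
}

// BenchmarkSet measures the write path, including allocations.
func BenchmarkSet(b *testing.B) {
	cache := ccache.New(ccache.Configure())
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		cache.Set(strconv.Itoa(i), i, time.Minute)
	}
}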

Bug report: TrackingGet goroutine unsafe with onDelete func

When TrackingGet is running, the gc func could interleave like this:

// Used when the cache was created with the Track() configuration option.
// Avoid otherwise
func (c *Cache) TrackingGet(key string) TrackedItem {
	item := c.Get(key)
	if item == nil {
		return NilTracked
	}

// switch to gc goroutine
...
		if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
			c.bucket(item.key).delete(item.key)
			c.size -= item.size
			c.list.Remove(element)
			if c.onDelete != nil {
				c.onDelete(item)
			}
			dropped += 1
			item.promotions = -2
		}
...
// switch back

	item.track()
	return item
}

That can return items whose onDelete func has already run.

This is a great software. Thanks a lot.

I hope it keeps being maintained and optimized for speed (I mean garbage collection in this case; it's actually very good already when used as a cache). Overall, the feature set is good enough. Thanks.

ttl not working

Sometimes the cache gives an old result even though the TTL has expired.

How can we solve this situation? Could you give me some help? We use the default cache configuration @karlseguin

Latest tags are not compatible with Go module versioning scheme

Go modules expects version tags to start with a v character.

The latest releases, 2.0.4 and 2.0.5, do not contain it, so the command go get github.com/karlseguin/ccache@latest resolves to version v2.0.3 (which does not contain a go.mod file).

The fix would be as simple as creating the corresponding tags v2.0.4 and v2.0.5.

Thanks!

Manual or timer GC of expired items

Currently, expired items are only evicted when the cache fills to its maximum configured size.

Add an API method for evicting expired items, possibly with a grace period (e.g. allow 30 minutes for the item to be .Extend()ed).
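
Usage could then look something like this (EvictExpired is a placeholder name for the proposed method, not an existing API):

// hypothetical: sweep expired items on a timer, leaving a grace period
// for items to be .Extend()ed
ticker := time.NewTicker(30 * time.Minute)
go func() {
	for range ticker.C {
		cache.EvictExpired()
	}
}()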

Document thread-safety more clearly

I imagine that it's safe to issue Get/Set/Fetch operations on the same cache from multiple threads, given the package is "aimed at high concurrency", and that the Clear method documents the opposite explicitly.

But if that's correct, it might be good to call it out explicitly. Something like:

Unless otherwise noted (e.g., for Clear), methods on caches are thread-safe.

?

Stop can cause race detector errors in testing

The unit tests for our project are normally run with the -race option, because we do a lot of concurrency stuff and want to avoid subtle unsafe usages. When I integrated ccache into the project, I started getting race detector errors in a test scenario where the cache is shut down with Stop() at the end of the test.

It seems that this is due to what the race detector considers to be unsafe usage of the promotables channel, where there is the potential for a race between close and a previous channel send, as documented here. The race detector isn't saying that a race really did happen during the test run, but it can tell, based on the pattern of accesses to the channel, that one could happen— so it considers that to be an automatic fail.

I wondered why such an issue wouldn't have shown up in ccache's own unit tests, but that's because—

  1. Those tests aren't being run in race detection mode.
  2. The tests are not calling Stop at all. Like, there's no defer cache.Stop() after creating a store (so I imagine there are a lot of orphaned goroutines being created during test runs), and there also doesn't seem to be any test coverage of Stop itself.

When I added a single defer cache.Stop() to a test, and then ran go test -race ./... instead of go test ./..., I immediately got the same kind of error. In any codebase where concurrency is very important, like this one, this is a bit concerning. Even if this particular kind of race condition might not have significant consequences in itself, the fact that it's not possible to run tests with race detection means we can't use that tool to detect other kinds of concurrency problems.

Cache can't be garbage collected

The goroutine associated with the cache worker prevents the cache from being garbage collected.
In my opinion there should be a way to stop it.
Alternatively, this fact should be documented.

Keep up the good work,
Enrico

Statistics/metrics reporting

For longer-running caches it is critically important to be able to obtain operational metrics: number of cached items, rate of eviction, and possibly some internal statistics. All decent caches therefore provide a way to obtain these unintrusively at run-time.

Ccache should have something like that to be trusted in most production settings.

High lock contention in LayeredCache.set with few primary keys

I understand that the problem I'm describing isn't a problem for LayeredCache's intended use (as described in the readme).

But I had an idea that I could use ccache as

  • 1 big cache for my entire application where I could resize the cache when I got into low memory situations
  • and use the LayeredCache to partition my cache into functional parts (which I could wipe out with cache.DeleteAll("somepartition") etc.)

But with the above I would only have a handful of primary keys, so each partition will end up in its own bucket, and I, not surprisingly, see lots of lock contention in the profiler.

I can certainly simplify my setup to not use the LayeredCache, but it would be convenient, and in my head this would be fixed if the bucket method below considered both keys in the hash:

func (c *LayeredCache) set(primary, secondary string, value interface{}, duration time.Duration, track bool) *Item {
	item, existing := c.bucket(primary).set(primary, secondary, value, duration, track)
	if existing != nil {
		c.deletables <- existing
	}
	c.promote(item)
	return item
}

func (c *LayeredCache) bucket(key string) *layeredBucket {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.buckets[h.Sum32()&c.bucketMask]
}
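
As a standalone illustration of the idea (a toy sketch mirroring the fnv masking above):

package main

import (
	"fmt"
	"hash/fnv"
)

const bucketMask = 15 // 16 buckets, as an example

// bucketIndex hashes both keys, so even a single primary key spreads
// its entries across all buckets.
func bucketIndex(primary, secondary string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(primary))
	h.Write([]byte(secondary))
	return h.Sum32() & bucketMask
}

func main() {
	// one primary key, many secondaries: the indexes now vary
	for _, sec := range []string{"a", "b", "c", "d"} {
		fmt.Println(bucketIndex("somepartition", sec))
	}
}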

No exposed method to access size of the cache.

The size of the cache is not an exposed variable, so it can't be accessed. A method that returns the size would be helpful for detecting overflow or cache misses.

New release implemented via generics?

I see there's an upcoming release that leverages generics.

But no accompanying issue, so thought I'd create one.

This way I can subscribe to it and get notified when it's released. 😄

Allow to change MaxSize while running

First, thanks for this library, I tested this here and it works great.

I have one challenge (which I may just skip in its first version) though, and that is how to control the size of the cache.

I understand that you can somehow control this by implementing Size() (the same strategy as used by Ristretto), but implementing that in a general way for structs/maps seems to be non-trivial. For my use case I can probably do some approximations.

Which is why I'm floating the idea of a SetMaxSize method that could be adjusted while running to handle "low on memory" situations.

item == nil, is it true for expired cache.Get("user:4") too?

  item := cache.Get("user:4")
  if item == nil {
    //handle
  } else {
    user := item.Value().(*User)
  }

I would like to check whether an item is empty. How do I check that? Will an expired item be emptied?
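
For reference: Get can return a non-nil but expired item (other snippets in these issues check Expired() for exactly this reason), so a combined check covers both cases (sketch):

item := cache.Get("user:4")
if item == nil || item.Expired() {
	// treat as a miss: the item is absent or expired
} else {
	user := item.Value().(*User)
	_ = user
}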

Fetch() is not atomic

If I call Fetch() simultaneously from multiple goroutines and the fetch func is rather slow, it keeps being called until one of the calls returns.
So it looks like Fetch() is not atomic.
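
A workaround sketch using golang.org/x/sync/singleflight (slowLoad is a made-up loader): concurrent misses for the same key collapse into a single loader call:

package cacheutil

import (
	"time"

	"github.com/karlseguin/ccache"
	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

func fetchOnce(cache *ccache.Cache, key string) (*ccache.Item, error) {
	if item := cache.Get(key); item != nil && !item.Expired() {
		return item, nil
	}
	// Do runs the function once per key at a time; concurrent callers
	// for the same key wait for and share the single result.
	_, err, _ := group.Do(key, func() (interface{}, error) {
		value, err := slowLoad(key)
		if err != nil {
			return nil, err
		}
		cache.Set(key, value, time.Minute)
		return value, nil
	})
	if err != nil {
		return nil, err
	}
	return cache.Get(key), nil
}

// slowLoad stands in for the expensive fetch (db, network, ...).
func slowLoad(key string) (interface{}, error) {
	return key, nil
}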

ability to set an item that never expires

Unless I'm missing something, the current implementation requires that all items have a finite TTL. That's fine for many use cases, but if I'm caching the results of some expensive computation that won't change over time for any given key, then I really wouldn't ever want values to be evicted (or recomputed by Fetch) simply because they're old— I would only want LRU entries to be evicted due to the cache being too full. I'd like to be able to specify a zero or negative time.Duration to mean "this doesn't expire."
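
In the meantime, the obvious stopgap (sketch) is to pass an extremely long duration so expiry is never reached in practice:

cache.Set(key, value, 100*365*24*time.Hour) // ~100 years: effectively never expires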

Bug report: v3.0.0 gc may fail

Thanks for this great software! In my usage scenario, I observed that the cache size keeps going up while dropped stays at 0 for a long time, which eventually results in OOM. I think there may be a bug here.

Here is a test code:

func TestPrune(t *testing.T) {
	maxSize := int64(5000)
	cache := ccache.New(ccache.Configure[string]().MaxSize(maxSize))
	epoch := 0
	for {
		epoch += 1
		expired := make([]string, 0)
		for i := 0; i < 50; i += 1 {
			key := strconv.FormatInt(rand.Int63n(maxSize*20), 10)
			item := cache.Get(key)
			if item == nil || item.TTL() > 1*time.Minute {
				expired = append(expired, key)
			}
		}
		for _, key := range expired {
			cache.Set(key, key, 5*time.Minute)
		}
		if epoch%5000 == 0 {
			size := cache.GetSize()
			dropped := cache.GetDropped()
			fmt.Printf("size=%d dropped=%d\n", size, dropped)
			time.Sleep(100 * time.Millisecond)
		}
	}
}

When running this code, the size grows well beyond 5000, and dropped stays at 0.

=== RUN   TestPrune
size=30270 dropped=171439
size=48587 dropped=149851
size=7654 dropped=225531
size=42967 dropped=146521
size=28343 dropped=162295
size=93191 dropped=195
size=98497 dropped=0
size=93155 dropped=15731
size=98476 dropped=0
size=98913 dropped=0
size=98936 dropped=0
size=98965 dropped=0

This may be the reason:

  • a key is updated; the existing oldItem is sent to deletables but not yet consumed
  • gc() is triggered; oldItem is removed by List.Remove(), setting oldItem.node.Prev to nil
  • deletables is consumed; oldItem is removed again by List.Remove(), setting l.Tail to nil (oldItem.node.Prev)
  • every upcoming gc() is skipped because node = c.list.Tail = nil, so size keeps going up and dropped stays at 0
  • if other items in deletables have not yet been gc'ed, l.Tail may be set back to non-nil and gc() may recover, but it will fail again in the future for the same reason

func (c *Cache[T]) gc() int {
   dropped := 0
   node := c.list.Tail

   itemsToPrune := int64(c.itemsToPrune)
   if min := c.size - c.maxSize; min > itemsToPrune {
      itemsToPrune = min
   }

   for i := int64(0); i < itemsToPrune; i++ {
      if node == nil { // gc is skipped if c.list.Tail = nil
         return dropped
      }
      prev := node.Prev
      item := node.Value
      if c.tracking == false || atomic.LoadInt32(&item.refCount) == 0 {
         c.bucket(item.key).delete(item.key)
         c.size -= item.size
         c.list.Remove(node)
         if c.onDelete != nil {
            c.onDelete(item)
         }
         dropped += 1
         item.promotions = -2
      }
      node = prev
   }
   return dropped
}

func (l *List[T]) Remove(node *Node[T]) {
   next := node.Next
   prev := node.Prev

   if next == nil {
      l.Tail = node.Prev // second Remove will make Tail = nil
   } else {
      next.Prev = prev
   }

   if prev == nil {
      l.Head = node.Next
   } else {
      prev.Next = next
   }
   node.Next = nil
   node.Prev = nil
}
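
One possible direction for a fix (my sketch, not necessarily the right patch; deduplicating deletables would be an alternative): make Remove idempotent so a second Remove of the same node can't clobber l.Tail:

func (l *List[T]) Remove(node *Node[T]) {
   // guard: an already-unlinked node has nil links and is no longer the
   // head or tail; removing it again must be a no-op
   if node.Next == nil && node.Prev == nil && l.Head != node && l.Tail != node {
      return
   }
   next := node.Next
   prev := node.Prev

   if next == nil {
      l.Tail = prev
   } else {
      next.Prev = prev
   }

   if prev == nil {
      l.Head = next
   } else {
      prev.Next = next
   }
   node.Next = nil
   node.Prev = nil
}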

Nice software. It's great!

I'm using it, but can you help save memory and garbage-collection time by looking into the cache below?
https://github.com/coocood/freecache

I hope ccache can be used as optimally as freecache, without a lot of GC happening. How does ccache compare with it? I will be using ccache frequently.

tag new version

Hi, at the moment doing go get github.com/karlseguin/ccache does not give us the OnDelete method because the latest tag is quite old.

Could you consider tagging a new version if the current master is stable enough?

Data race in cache.Clear

//this isn't thread safe. It's meant to be called from non-concurrent tests

But even in non-concurrent tests, go test -race reports a race:

WARNING: DATA RACE
Write at 0x00c000162800 by goroutine 23:
  github.com/karlseguin/ccache/v2.(*LayeredCache).Clear()
      /Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:172 +0xb4
  github.com/gohugoio/hugo/cache/memcache.(*Cache).Clear()
      /Users/bep/dev/go/gohugoio/hugo/cache/memcache/memcache.go:211 +0x63c
  github.com/gohugoio/hugo/cache/memcache.TestCache()
      /Users/bep/dev/go/gohugoio/hugo/cache/memcache/memcache_test.go:47 +0x52b
  testing.tRunner()
      /Users/bep/dev/go/dump/go/src/testing/testing.go:1109 +0x202

Previous write at 0x00c000162800 by goroutine 31:
  github.com/karlseguin/ccache/v2.(*LayeredCache).doPromote()
      /Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:269 +0x50e
  github.com/karlseguin/ccache/v2.(*LayeredCache).worker()
      /Users/bep/go/pkg/mod/github.com/karlseguin/ccache/[email protected]/layeredcache.go:229 +0x8d3
