
moka's Issues

Memory leak

Hi team -- thanks for making moka available to the world!

I'm having some difficulty using it: for me, the cache simply grows until it runs out of memory.

Below is a minimal reproducible example. It grows to a few GiB of memory usage after a few minutes. Am I not instantiating the cache properly? (Thanks for your patience; I'm a Rust n00b.)

use moka::unsync::{Cache, CacheBuilder};
use ahash::RandomState;

use std::time::Duration;
use rand::Rng;

const CACHE_SIZE: u64 = 1_000_000;
const TTI: Duration = Duration::from_millis(10_000);

fn main() {
    let mut cache: Cache<(u64, u64, u64), (), RandomState> = CacheBuilder::new(CACHE_SIZE as usize)
        .initial_capacity(CACHE_SIZE as usize)
        .time_to_idle(TTI)
        .build_with_hasher(RandomState::default());

    let mut rng = rand::thread_rng();
    // Insert an endless stream of unique random keys; entries are never
    // read back, so each is expected to expire via the 10-second TTI.
    loop {
        let key = (rng.gen::<u64>(), rng.gen::<u64>(), rng.gen::<u64>());
        cache.insert(key, ());
    }
}

Integer overflow detected in a unit test for the `FrequencySketch`

Integer overflow detected in a unit test.

https://github.com/moka-rs/moka/runs/5951855180

---- common::frequency_sketch::tests::heavy_hitters stdout ----
thread 'common::frequency_sketch::tests::heavy_hitters' panicked at 'attempt to add with overflow',
src/common/frequency_sketch.rs:183:9

This case was added 17 months ago via 42fb4c8, and this is the first time it has failed (as far as I can remember).

Perhaps we will need to replace some `+` operations with `wrapping_add`.
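For illustration, here is a minimal sketch of the kind of change being suggested (a hypothetical helper; the actual test code differs):

    fn sum_estimates(estimates: &[u64]) -> u64 {
        // Unlike the plain `+` operator, `wrapping_add` wraps around on
        // overflow instead of panicking in debug builds.
        estimates.iter().fold(0u64, |acc, &e| acc.wrapping_add(e))
    }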

Support Linux on MIPS and ARMv5TE

Support the following 32-bit Linux platforms if possible: MIPS (e.g. mipsel) and ARMv5TE.

Currently, Moka will not compile on them for the following reasons:

  • Moka uses std::atomic::AtomicU64 mainly as the expiration clock for entries, but AtomicU64 is not available on mipsel.
    • "PowerPC and MIPS platforms with 32-bit pointers do not have AtomicU64 or AtomicI64 types" (doc)
    • It seems AtomicU64 is not available on ARMv5TE either.
  • quanta also fails to compile, because crossbeam-utils does not provide the atomic::AtomicCell::fetch_add method on these platforms (metrics-rs/quanta#54)

Perhaps we should replace all usages of AtomicU64 with RwLock<Option<std::time::Instant>> on these platforms?
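A rough sketch of that fallback, with illustrative names (not Moka's actual source):

    use std::sync::RwLock;
    use std::time::Instant;

    // Substitute for an atomic timestamp on targets without AtomicU64.
    struct AtomicInstant {
        inner: RwLock<Option<Instant>>,
    }

    impl AtomicInstant {
        fn new() -> Self {
            Self { inner: RwLock::new(None) }
        }

        // A write lock stands in for an atomic store.
        fn set(&self, t: Instant) {
            *self.inner.write().unwrap() = Some(t);
        }

        // A read lock stands in for an atomic load.
        fn get(&self) -> Option<Instant> {
            *self.inner.read().unwrap()
        }
    }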

[BUG] `usize` overflow on big cache capacity

Hello,

First of all thank you for the very nice work !

There is a potential multiplication overflow at the line referenced below that could be handled.

let skt_capacity = usize::max(max_capacity * 32, 100);

This could be handled either by returning an error type, or by capping the cache capacity at usize::max_value() / 32.
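As a sketch of the capping approach (illustrative, not the actual patch):

    fn frequency_sketch_capacity(max_capacity: usize) -> usize {
        // `saturating_mul` clamps at usize::MAX instead of overflowing,
        // and the `max(.., 100)` lower bound is preserved.
        usize::max(max_capacity.saturating_mul(32), 100)
    }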

Flaky test `cht::segment::tests::drop_many_values` under Cargo Tarpaulin

Test cht::segment::tests::drop_many_values fails occasionally when executed under Cargo Tarpaulin. Perhaps the timing of dropping invalidated cache keys is affected by Cargo Tarpaulin in a non-deterministic way?

https://app.circleci.com/pipelines/github/moka-rs/moka/520/workflows/ad175a12-9d63-4a20-b10d-d3dad9b78a81/jobs/506

#!/bin/bash -eo pipefail
docker run --security-opt seccomp=unconfined -v "${PWD}:/volume" \
  --env RUSTFLAGS='--cfg circleci' \
  xd009642/tarpaulin \
  cargo tarpaulin -v \
    --features 'sync, future, dash' \
    --ciserver circle-ci \
    --coveralls ${COVERALLS_TOKEN} \
    --timeout 600 \
|| true

...

failures:

---- cht::segment::tests::drop_many_values stdout ----
thread 'cht::segment::tests::drop_many_values' panicked at 'assertion failed: this_key_parent.was_dropped()', src/cht/segment.rs:1448:13
stack backtrace:
   0: rust_begin_unwind
             at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/std/src/panicking.rs:584:5
   1: core::panicking::panic_fmt
             at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/panicking.rs:143:14
   2: core::panicking::panic
             at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/panicking.rs:48:5
   3: moka::cht::segment::tests::drop_many_values
             at ./src/cht/segment.rs:1448:13
   4: moka::cht::segment::tests::drop_many_values::{{closure}}
             at ./src/cht/segment.rs:1364:5
   5: core::ops::function::FnOnce::call_once
             at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/ops/function.rs:227:5
   6: core::ops::function::FnOnce::call_once
             at /rustc/fe5b13d681f25ee6474be29d748c65adcd91f69e/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


failures:
    cht::segment::tests::drop_many_values

test result: FAILED. 112 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 45.93s

Ability to invalidate the entire cache?

A use case has come up where I'd like to invalidate the entire cache (or perhaps invalidate based on a predicate on the values). I'm not seeing any way to do this at the moment. I am using the future based cache.
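For reference, later Moka releases added invalidate_all (discussed in another issue below) and predicate-based invalidation. A usage sketch, assuming a version that has both, with the future feature enabled:

    use moka::future::Cache;

    #[tokio::main]
    async fn main() {
        let cache: Cache<u32, String> = Cache::new(100);
        cache.insert(1, "a".to_string()).await;

        // Discards all cached values; eviction happens in the background.
        cache.invalidate_all();

        // Predicate-based invalidation also exists in later releases, but
        // requires opting in on the builder in some versions, e.g.:
        // let _ = cache.invalidate_entries_if(|_k, v| v.is_empty());
    }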

Implications of `Arc<K>: Borrow<Q>`

Discussed in #163

Originally posted by JayKickliter July 9, 2022
For my particular use case, I'm finding moka::sync::Cache's requirement that Arc<K>: Borrow<Q> very restrictive, as my key type is somewhat expensive to create. Is this an absolute requirement for this kind of cache, or an implementation detail?

Expected behavior

Using HashMap as an example, I'd expect to be able to query using a &[u8] when the key type is Vec<u8>.

    use std::collections::HashMap;
    let mut map: HashMap<Vec<u8>, ()> = HashMap::new();
    map.insert(vec![1_u8], ());
    map.contains_key([1_u8].as_slice());

Failing example, simplified

    use moka::sync::Cache;
    let cache: Cache<Vec<u8>, ()> = Cache::new(1);
    cache.insert(vec![1_u8], ());
    cache.contains_key([1_u8].as_slice());
error[E0277]: the trait bound `Arc<Vec<u8>>: Borrow<[u8]>` is not satisfied
   --> src/main.rs:5:11
    |
5   |     cache.contains_key([1_u8].as_slice());
    |           ^^^^^^^^^^^^ the trait `Borrow<[u8]>` is not implemented for `Arc<Vec<u8>>`
    |
    = help: the trait `Borrow<T>` is implemented for `Arc<T>`
note: required by a bound in `moka::sync::Cache::<K, V, S>::contains_key`
   --> /Users/jay/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.9.0/src/sync/cache.rs:880:17
    |
880 |         Arc<K>: Borrow<Q>,
    |                 ^^^^^^^^^ required by this bound in `moka::sync::Cache::<K, V, S>::contains_key`

EDIT: I now wonder if this could be considered a bug after noticing the following.

This also seems to imply that the docs for contains_key are incorrect:

The key may be any borrowed form of the cache’s key type,

Help with size based eviction that weighs with bytes

For my project, I want to add cache size limits that are set automatically based on system memory. Are there any examples of someone doing this?

I'm not sure how to accurately calculate the size in bytes of adding an item. It's going to be something like size_of(key) + size_of(value) + overhead. What is the overhead? And is this the right idea?

Also, my cache's values are serde_json::Value, and I'm not sure how to calculate their size in memory. Maybe that's a better question for the serde_json issue tracker, though.

I know that checking the size of things on the heap isn't cheap, and that the weigher is designed for unit-less weights. But I just want to keep my program from OOMing if the cache fills up with large values.
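A rough weigher sketch along those lines; the per-entry overhead constant below is a guess, not a documented Moka value:

    use moka::sync::Cache;

    // Guessed fixed cost per entry (Arc headers, internal metadata, ...).
    const PER_ENTRY_OVERHEAD: u32 = 80;

    fn main() {
        let cache: Cache<String, Vec<u8>> = Cache::builder()
            .weigher(|k: &String, v: &Vec<u8>| -> u32 {
                // Approximate heap size: key bytes + value bytes + overhead.
                let bytes = k.len().saturating_add(v.len());
                u32::try_from(bytes)
                    .unwrap_or(u32::MAX)
                    .saturating_add(PER_ENTRY_OVERHEAD)
            })
            // With a weigher, capacity is in weight units (here: ~bytes).
            .max_capacity(256 * 1024 * 1024)
            .build();

        cache.insert("key".to_string(), vec![0u8; 1024]);
    }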

CI: Run Linux AArch64 tests on real hardware

Currently, we are using QEMU's user mode emulation to run tests for Linux AArch64 target on x86_64 host, but it does not seem to emulate weak memory ordering in such an environment (crossbeam-rs/crossbeam#837).

Moka's lock-free concurrent hash table utilizes weak memory ordering whenever possible, so it would be better to run CI on real AArch64 hardware.

Cirrus CI provides Linux AArch64 containers.

Enable unit tests for `moka::cht::*` modules

This is a follow-up issue for PR #77. In the PR, we imported (copied) some of the source files from the moka-cht crate, but there are some remaining action items:

  • Update and re-enable unit tests for moka::cht::* modules.
  • Update the doc.

Docs error: the example about get_or_try_insert_with

error[E0433]: failed to resolve: use of undeclared crate or module `futures_util`
  --> src/main.rs:48:5
   |
48 |     futures_util::future::join_all(tasks).await;
   |     ^^^^^^^^^^^^ use of undeclared crate or module `futures_util`

For more information about this error, try `rustc --explain E0433`.
error: could not compile `hello-rust` due to previous error

We may need to add a note about the futures-util dependency to the docs.

Segmentation faults in moka-cht under heavy workloads on a many-core machine

I have seen segmentation faults a few times while running mokabench on Moka v0.5.1. It seems to happen randomly while the get_or_insert_with method is heavily called concurrently from many threads.

+ ./target/release/mokabench --enable-invalidate-entries-if --enable-insert-once
Cache, Max Capacity, Clients, Inserts, Reads, Hit Rate, Duration Secs
Moka Unsync Cache, 100000, -, 14696832, 31104534, 52.750, 8.575
Moka Cache, 100000, 16, 15550290, 31954711, 51.336, 17.365
Moka Cache, 100000, 24, 15543954, 31948375, 51.347, 17.743
Moka Cache, 100000, 32, 15527876, 31932297, 51.373, 17.877
./run-tests.sh: line 36: 21740 Segmentation fault      (core dumped) ./target/release/mokabench --enable-invalidate-entries-if --enable-insert-once

I am using Amazon EC2 to run mokabench. After spending a few days on it, I found it is related to the version of crossbeam-epoch and the number of CPU cores.

Segfaults?  Moka    cht / moka-cht   crossbeam-epoch  EC2 Instance Type  Arch    vCPUs  OS
Yes         v0.5.1  moka-cht v0.5.0  v0.9.5           c5.9xlarge         x86_64  36     Amazon Linux 2
No          v0.5.1  cht v0.4.1       v0.8.2           c5.9xlarge         x86_64  36     Amazon Linux 2
No          v0.5.1  moka-cht v0.5.0  v0.9.5           c5.4xlarge         x86_64  16     Amazon Linux 2

crossbeam-epoch is used by moka-cht, the concurrent hash table used by Moka.

I examined stack traces from the core dumps and found two patterns. I have not identified the root cause yet. Perhaps a crossbeam_epoch::Owned<T>, which is very similar to Box<T>, stored in moka-cht became a dangling pointer for some reason?

Pattern 1: At Arc::ne()
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000055cd7249862e in <alloc::sync::Arc<T> as alloc::sync::ArcEqIdent<T>>::ne ()
    at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs:2095
2095	/rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs: No such file or directory.
[Current thread is 1 (Thread 0x7fe61d1e8700 (LWP 7009))]
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /data/core-dumps/mokabench-copy/target/release/mokabench.
Use `info auto-load python-scripts [REGEXP]' to list them.
Missing separate debuginfos, use: debuginfo-install glibc-2.26-48.amzn2.x86_64 libgcc-7.3.1-13.amzn2.x86_64
(gdb) bt
#0  0x000055cd7249862e in <alloc::sync::Arc<T> as alloc::sync::ArcEqIdent<T>>::ne ()
    at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs:2095
#1  <alloc::sync::Arc<T> as core::cmp::PartialEq>::ne () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs:2141
#2  core::cmp::impls::<impl core::cmp::PartialEq<&B> for &A>::ne () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/cmp.rs:1356
#3  moka_cht::map::bucket::BucketArray<K,V>::insert_or_modify::{{closure}} ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket.rs:255
#4  moka_cht::map::bucket::BucketArray<K,V>::probe_loop ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket.rs:367
#5  moka_cht::map::bucket::BucketArray<K,V>::insert_or_modify ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket.rs:248
#6  0x000055cd72476961 in moka_cht::map::bucket_array_ref::BucketArrayRef<K,V,S>::insert_with_or_modify_entry_and ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket_array_ref.rs:191
#7  0x000055cd7248d19a in moka_cht::segment::map::HashMap<K,V,S>::insert_with_or_modify_entry_and ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/segment/map.rs:933
#8  moka_cht::segment::map::HashMap<K,V,S>::insert_with_or_modify ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/segment/map.rs:798
#9  moka::sync::value_initializer::ValueInitializer<K,V,S>::try_insert_waiter ()
    at /home/ec2-user/.cargo/git/checkouts/moka-6ea430727379b61e/1bf28ed/src/sync/value_initializer.rs:108
#10 0x000055cd7248cdf8 in moka::sync::value_initializer::ValueInitializer<K,V,S>::init_or_read ()
    at /home/ec2-user/.cargo/git/checkouts/moka-6ea430727379b61e/1bf28ed/src/sync/value_initializer.rs:42
#11 0x000055cd72492f74 in moka::sync::cache::Cache<K,V,S>::get_or_insert_with_hash_and_fun ()
    at /home/ec2-user/.cargo/git/checkouts/moka-6ea430727379b61e/1bf28ed/src/sync/cache.rs:277
#12 moka::sync::cache::Cache<K,V,S>::get_or_insert_with () at /home/ec2-user/.cargo/git/checkouts/moka-6ea430727379b61e/1bf28ed/src/sync/cache.rs:264
#13 0x000055cd7248f90d in mokabench::cache::sync_cache::SyncCache::get_or_insert_with () at src/cache/sync_cache.rs:43
#14 <mokabench::cache::sync_cache::SyncCache as mokabench::cache::CacheSet<mokabench::parser::ArcTraceEntry>>::get_or_insert_once ()
    at src/cache/sync_cache.rs:79
#15 0x000055cd7246eb87 in <mokabench::cache::sync_cache::SharedSyncCache as mokabench::cache::CacheSet<mokabench::parser::ArcTraceEntry>>::get_or_insert_once
    () at src/cache/sync_cache.rs:125
#16 mokabench::process_commands () at src/lib.rs:107
...
Pattern 2: At atomic_sub() in Arc::drop()
Program terminated with signal SIGSEGV, Segmentation fault.
#0  core::sync::atomic::atomic_sub () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/sync/atomic.rs:2401
2401	/rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/sync/atomic.rs: No such file or directory.
[Current thread is 1 (Thread 0x7f6e0f9b2900 (LWP 32108))]
Missing separate debuginfos, use: debuginfo-install glibc-2.26-48.amzn2.x86_64 libgcc-7.3.1-13.amzn2.x86_64
(gdb) bt
#0  core::sync::atomic::atomic_sub () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/sync/atomic.rs:2401
#1  core::sync::atomic::AtomicUsize::fetch_sub () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/sync/atomic.rs:1769
#2  <alloc::sync::Arc<T> as core::ops::drop::Drop>::drop () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs:1558
#3  core::ptr::drop_in_place<alloc::sync::Arc<usize>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#4  core::ptr::drop_in_place<moka_cht::map::bucket::Bucket<alloc::sync::Arc<usize>,alloc::sync::Arc<async_lock::rwlock::RwLock<core::option::Option<core::result::Result<alloc::sync::Arc<alloc::boxed::Box<[u8]>>,alloc::sync::Arc<alloc::boxed::Box<dyn std::error::Error+core::marker::Send+core::marker::Sync>>>>>>>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#5  core::ptr::drop_in_place<alloc::boxed::Box<moka_cht::map::bucket::Bucket<alloc::sync::Arc<usize>,alloc::sync::Arc<async_lock::rwlock::RwLock<core::option::Option<core::result::Result<alloc::sync::Arc<alloc::boxed::Box<[u8]>>,alloc::sync::Arc<alloc::boxed::Box<dyn std::error::Error+core::marker::Send+core::marker::Sync>>>>>>>>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#6  core::mem::drop () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/mem/mod.rs:889
#7  <T as crossbeam_epoch::atomic::Pointable>::drop ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.9.5/src/atomic.rs:212
#8  <crossbeam_epoch::atomic::Owned<T> as core::ops::drop::Drop>::drop ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.9.5/src/atomic.rs:1087
#9  core::ptr::drop_in_place<crossbeam_epoch::atomic::Owned<moka_cht::map::bucket::Bucket<alloc::sync::Arc<usize>,alloc::sync::Arc<async_lock::rwlock::RwLock<core::option::Option<core::result::Result<alloc::sync::Arc<alloc::boxed::Box<[u8]>>,alloc::sync::Arc<alloc::boxed::Box<dyn std::error::Error+core::marker::Send+core::marker::Sync>>>>>>>>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#10 core::mem::drop () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/mem/mod.rs:889
#11 moka_cht::map::bucket::defer_acquire_destroy::{{closure}} ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket.rs:684
#12 crossbeam_epoch::guard::Guard::defer_unchecked ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.9.5/src/guard.rs:195
#13 moka_cht::map::bucket::defer_acquire_destroy () at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/map/bucket.rs:682
#14 <moka_cht::segment::map::HashMap<K,V,S> as core::ops::drop::Drop>::drop ()
    at /home/ec2-user/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-cht-0.5.0/src/segment/map.rs:1032
#15 0x000055db206daf73 in core::ptr::drop_in_place<moka_cht::segment::map::HashMap<alloc::sync::Arc<usize>,alloc::sync::Arc<async_lock::rwlock::RwLock<core::option::Option<core::result::Result<alloc::sync::Arc<alloc::boxed::Box<[u8]>>,alloc::sync::Arc<alloc::boxed::Box<dyn std::error::Error+core::marker::Send+core::marker::Sync>>>>>>>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#16 core::ptr::drop_in_place<moka::future::value_initializer::ValueInitializer<usize,alloc::sync::Arc<alloc::boxed::Box<[u8]>>,std::collections::hash::map::RandomState>> () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ptr/mod.rs:192
#17 alloc::sync::Arc<T>::drop_slow () at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/sync.rs:1051
#18 0x000055db206ea837 in mokabench::run_multi_tasks::{{closure}} () at /home/ec2-user/mokabench/src/lib.rs:314

Memory leak after `moka::sync::Cache` is dropped

After a sync::Cache is dropped, it seems like some memory is not freed.

Reproduce:

use moka::sync::Cache;

fn main() {
    let cache: Cache<u32, u32> = Cache::new(1000);
    for i in 0..1000u32 {
        cache.insert(i, i);
    }
    drop(cache);
}

and run it with Valgrind:

$ valgrind --leak-check=full ./target/release/moka-memleak
# ...
==910805== 80,000 (48,000 direct, 32,000 indirect) bytes in 1,000 blocks are definitely lost in loss record 29 of 29
==910805==    at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==910805==    by 0x118F5D: moka::sync_base::base_cache::BaseCache<K,V,S>::do_insert_with_hash::{{closure}} (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x118C52: moka::cht::map::bucket::InsertOrModifyState<K,V,F>::into_insert_bucket (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x116087: moka::cht::map::bucket::BucketArray<K,V>::insert_or_modify (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x12CC7E: moka::cht::map::bucket_array_ref::BucketArrayRef<K,V,S>::insert_with_or_modify_entry_and (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x12969B: moka_memleak::main (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x1362A2: std::sys_common::backtrace::__rust_begin_short_backtrace (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x136DB8: _ZN3std2rt10lang_start28_$u7b$$u7b$closure$u7d$$u7d$17hcd59787f08a77695E.llvm.15037216701979191285 (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805==    by 0x1568C5: call_once<(), (dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (function.rs:280)
==910805==    by 0x1568C5: do_call<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panicking.rs:492)
==910805==    by 0x1568C5: try<i32, &(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe)> (panicking.rs:456)
==910805==    by 0x1568C5: catch_unwind<&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe), i32> (panic.rs:137)
==910805==    by 0x1568C5: {closure#2} (rt.rs:128)
==910805==    by 0x1568C5: do_call<std::rt::lang_start_internal::{closure_env#2}, isize> (panicking.rs:492)
==910805==    by 0x1568C5: try<isize, std::rt::lang_start_internal::{closure_env#2}> (panicking.rs:456)
==910805==    by 0x1568C5: catch_unwind<std::rt::lang_start_internal::{closure_env#2}, isize> (panic.rs:137)
==910805==    by 0x1568C5: std::rt::lang_start_internal (rt.rs:128)
==910805==    by 0x12A741: main (in /home/ubuntu/Projects/moka-memleak/target/release/moka-memleak)
==910805== 
==910805== LEAK SUMMARY:
==910805==    definitely lost: 48,000 bytes in 1,000 blocks
==910805==    indirectly lost: 32,000 bytes in 1,000 blocks
==910805==      possibly lost: 2,388 bytes in 9 blocks
==910805==    still reachable: 14,432 bytes in 73 blocks
==910805==         suppressed: 0 bytes in 0 blocks
==910805== Reachable blocks (those to which a pointer was found) are not shown.
==910805== To see them, rerun with: --leak-check=full --show-leak-kinds=all

Panic causes "thread panicked at 'internal error: entered unreachable code'"

Hi, thank you for this very useful library! However, I've found a way to enter a supposedly unreachable state.

Here's the relevant part of the backtrace:

thread 'main' panicked at 'internal error: entered unreachable code', /home/user/.cargo/git/checkouts/moka-6ea430727379b61e/19b6925/src/future/value_initializer.rs:101:29
stack backtrace:
   0: rust_begin_unwind
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/std/src/panicking.rs:515:5
   1: core::panicking::panic_fmt
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/panicking.rs:92:14
   2: core::panicking::panic
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/panicking.rs:50:5
   3: moka::future::value_initializer::ValueInitializer<K,V,S>::try_init_or_read::{{closure}}
             at /home/user/.cargo/git/checkouts/moka-6ea430727379b61e/19b6925/src/future/value_initializer.rs:101:29
   4: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/future/mod.rs:80:19
   5: moka::future::cache::Cache<K,V,S>::get_or_try_insert_with_hash_and_fun::{{closure}}
             at /home/user/.cargo/git/checkouts/moka-6ea430727379b61e/19b6925/src/future/cache.rs:640:15
   6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/future/mod.rs:80:19
   7: moka::future::cache::Cache<K,V,S>::get_or_try_insert_with::{{closure}}
             at /home/user/.cargo/git/checkouts/moka-6ea430727379b61e/19b6925/src/future/cache.rs:445:9

This issue seems to occur when two threads attempt to fetch the same key and the thread on which the value initialization was started panics. I've built a simple program to demonstrate this issue:

use tokio;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let cache: Arc<moka::future::Cache<i32, i32>> = Arc::new(moka::future::Cache::new(16));
    let semaphore = Arc::new(tokio::sync::Semaphore::new(0));
    {
        let cache_ref = cache.clone();
        let semaphore_ref = semaphore.clone();
        tokio::task::spawn(async move {
            cache_ref.get_or_try_insert_with::<_, ()>(1, async move {
                semaphore_ref.add_permits(1);
                tokio::time::sleep(tokio::time::Duration::from_millis(1000)).await;
                panic!("Panic during get_or_try_insert_with");
                Ok(10)
            }).await;
        });
    }
    semaphore.acquire().await.unwrap();
    cache.get_or_try_insert_with::<_, ()>(1, async move {
        println!("Async block in second get_or_try_insert_with called");
        Ok(5)
    }).await;
}

I think the unreachable code is being hit because the write lock obtained at line 71 of value_initializer.rs gets dropped without any value being written.

I see two possible options for handling this issue:

  1. unreachable!() can be replaced with a panic! that carries a more helpful message.
  2. Perhaps the existing waiter should be removed, and a new waiter created and started using the future passed in the second call?

Add new api similar to `try_get_with`, but return `Option` rather than `Result`?

Thanks for the great work on the library!

I am using the library in one of the projects I'm working on, and I feel an API similar to try_get_with, but where the init function returns Option<V> instead of Result<V, E>, would be more ergonomic.

Just want to know if you have any objections to adding such an API. Maybe something like "optional_get_with"? Happy to contribute if you are okay with it. Thanks!
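A usage sketch of the proposed API (Moka later shipped this as optional_get_with; treat the exact signature here as illustrative):

    use moka::sync::Cache;

    // Stand-in for a lookup that can legitimately find nothing.
    fn lookup(id: u32) -> Option<String> {
        (id % 2 == 0).then(|| format!("value-{id}"))
    }

    fn main() {
        let cache: Cache<u32, String> = Cache::new(100);
        // Returns Some(v) and caches it if the closure produces a value;
        // returns None (caching nothing) otherwise.
        let v = cache.optional_get_with(4, || lookup(4));
        println!("{v:?}");
    }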

CI: Fix the test coverage job

The test coverage job is failing:

Circle CI Job 374

memory allocation of 8589934592 bytes failed
Apr 02 08:42:10.963 ERROR cargo_tarpaulin: Failed to get test coverage! 
    Error: Failed to run tests: Attempting to handle tarpaulin being signaled
Error: "Failed to get test coverage!
    Error: Failed to run tests: Attempting to handle tarpaulin being signaled"

Too long with no output (exceeded 10m0s): context deadline exceeded

Also, need to update the job definition:

This job is either using an Ubuntu 14.04 image, an Ubuntu 16.04 image, or the default Ubuntu 14.04-based image by specifying machine:true in config. These images (including the default image) will be removed on May 31, 2022. Upgrade your image from Ubuntu 14.04 or 16.04.

feat: Notifications on eviction, etc.

Summary

Add support for eviction listener.

  • This feature was requested by users:
  • and implemented by:

Remaining Tasks

Due to time constraints, we could not implement the following features. They will be covered by future releases.

  • Support DeliveryMode::Immediate in future::Cache.
  • Send notifications without using a thread pool.

`Cache` missing `Debug` impl

The Cache types are all missing Debug impls, which is annoying. I'm not sure offhand what such an impl should show, but perhaps some basic stats about the cache, if that's easy to gather.

Feature Request: Get or Insert Future

The ability to get-or-insert atomically would be nice for preventing multiple requests for the same resource from being created to populate the cache. Otherwise, there's a race condition between get and insert: another task could also attempt a get, see an empty record, and try to populate it too.

Remove `atomic64` feature and use `crossbeam::atomic::AtomicCell<u64>` for all targets?

In order to support target platforms without AtomicU64, we added a Cargo feature called atomic64 to Moka v0.5.3 and put it into the default feature list (#38, #39).

  • If the feature is enabled, Moka keeps the old behavior (by using AtomicU64 as timestamps of cache entries).
  • Otherwise, it uses RwLock<Instant> so that it will compile for such targets.

The feature also controls whether quanta crate is used or not by Moka, because quanta v0.9.2 or earlier do not compile for such targets.

We might want to replace the above with crossbeam::atomic::AtomicCell<u64>, because it provides functionality similar to AtomicU64 (when it is supported by the target) and to RwLock<u64> (when AtomicU64 is not supported).

https://docs.rs/crossbeam/0.8.1/crossbeam/atomic/struct.AtomicCell.html

Operations on AtomicCells use atomic instructions whenever possible, and synchronize using global locks otherwise.

It will be more convenient, as it switches implementations without needing a Cargo feature like atomic64; it combines auto-generated Rust source with a build script to achieve the auto-switching.

Also, the next version of quanta will support these targets (metrics-rs/quanta#55), so once it is released, we can always enable quanta.

[feature request] add `Cache::get_with_if(&self, key: &K, fallback: Fut, cond: Fn(&V) -> bool) -> V` method

I'm building a cache whose entries have their own standalone TTLs. However, Moka only supports a global TTL and TTI. So I'm proposing a new method: Cache::get_with_if(&self, key: &K, fallback: Fut, cond: Fn(&V) -> bool) -> V

It would work like Cache::get_with(&self, key: &K, init: Fut) -> V, but when the key does not exist or cond(&v) returns true, the fallback future would be used to (re)initialize the value.
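A user-side approximation of the proposed behavior, built on the existing get / invalidate / get_with APIs (a sketch assuming a moka version where future::Cache::get is synchronous; unlike a native get_with_if, the check below is not atomic):

    use moka::future::Cache;
    use std::future::Future;
    use std::hash::Hash;

    // Re-initialize the entry when `cond` says it is stale.
    async fn get_with_if<K, V>(
        cache: &Cache<K, V>,
        key: K,
        fallback: impl Future<Output = V>,
        cond: impl Fn(&V) -> bool,
    ) -> V
    where
        K: Hash + Eq + Send + Sync + 'static,
        V: Clone + Send + Sync + 'static,
    {
        if let Some(v) = cache.get(&key) {
            if !cond(&v) {
                return v; // still fresh; no re-initialization
            }
            cache.invalidate(&key).await;
        }
        // Missing or just invalidated: initialize from the fallback.
        cache.get_with(key, fallback).await
    }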

`future::Cache::get_or_(try_)insert_with()` can lead UB as accepting `init` future that is not `Send` or `'static`.

In Moka v0.5.0 and v0.5.1, the following code fragments for future::Cache will compile. However, Moka should reject them, as they can lead to undefined behavior (UB) in safe Rust.

Example 1

use moka::future::Cache;
use std::rc::Rc;

#[tokio::main]
async fn main() {
    let cache: Cache<_, String> = Cache::new(100);

    // Rc is !Send.
    let data = Rc::new("zero".to_string());
    let data1 = Rc::clone(&data);

    cache
        .get_or_insert_with(0, async move {
            // A data race may occur. 
            // The async block can be executed by a different thread
            // but Rc's internal reference counters are not thread safe.
            data1.to_string()
        })
        .await;

    println!("{:?}", data);
}

Example 2

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<_, String> = Cache::new(100);

    let data = "zero".to_string();
    {
        // Not 'static.
        let data_ref = &data;

        cache
            .get_or_insert_with(0, async {
                // This may become a dangling pointer.
                // The async block can be executed by a different thread so
                // the captured reference `data_ref` may outlive its value.
                data_ref.to_string()
            })
            .await;
    }

    println!("{:?}", data);
}

So future::Cache's get_or_insert_with() and get_or_try_insert_with() need extra Send and 'static bounds on the init future.
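An illustrative signature with the proposed bounds (not Moka's actual source; it just shows the kind of constraint that rejects both examples above):

    use std::future::Future;

    struct DemoCache;

    impl DemoCache {
        // `Send` rejects the `Rc` capture in Example 1, and `'static`
        // rejects the short-lived `&data` capture in Example 2.
        async fn get_or_insert_with<V, F>(&self, _key: u32, init: F) -> V
        where
            V: Send + 'static,
            F: Future<Output = V> + Send + 'static,
        {
            init.await
        }
    }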

Bug in `moka::future::Cache::invalidate_all`? Elements not being invalidated immediately.

Hi.

According to the documentation of invalidate_all:

pub fn invalidate_all(&self)
Discards all cached values.

This method returns immediately and a background thread will evict all the cached values inserted before the time when this method was called. It is guaranteed that the get method must not return these invalidated values even if they have not been evicted.

Like the invalidate method, this method does not clear the historic popularity estimator of keys so that it retains the client activities of trying to retrieve an item.

From this I surmised that, even though the actual removal of elements occurs in the background, they would somehow be marked as invalid, and as such a subsequent get would not see them.

However, this does not seem to be happening. Here's a small minimal example that demonstrates the problem:

#[tokio::test]
async fn test_cache_invalidate() {
    let cache = Cache::new(1024 as u64);

    assert_eq!(cache.get(&0), None);
    cache.insert(0, 1).await;
    assert_eq!(cache.get(&0), Some(1));
    cache.invalidate_all();
    assert_eq!(cache.get(&0), None);
}

This fails on the last line (the get returns Some(1)).

Is this a bug or am I misreading the documentation?

[BUG] overflow when ttl is large

The following panics:

#[tokio::test]
async fn large_ttl() {
    let cache: moka::future::Cache<(), ()> = moka::future::CacheBuilder::new(usize::MAX)
        .time_to_live(Duration::MAX)
        .build();
    cache.insert((), ()).await;
    cache.sync();
}
panicked at 'overflow when adding duration to instant',
.cargo/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/instant.rs:142:14
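A common guard for this class of panic, as a sketch (not necessarily how Moka or quanta addressed it):

    use std::time::{Duration, Instant};

    // Compute an expiration time without panicking on overflow;
    // `None` can then be treated as "never expires".
    fn expiration(now: Instant, ttl: Duration) -> Option<Instant> {
        now.checked_add(ttl)
    }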

`Cache::get` method sometimes panics due to an integer overflow in `quanta::instant::Instant::now`

This issue was reported by @BrynCooke on May 10, 2022 UTC via #113 (comment)

I have been hitting this issue and am able to reproduce by running an integration test in a loop.

Looks like it is possibly related to this: metrics-rs/quanta#61

The stacktrace is as follows:

2022-05-10T11:16:52.903652Z ERROR integration_tests: panicked at 'attempt to add with overflow', /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/num/mod.rs:834:5


1: std::panicking::rust_panic_with_hook
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:702:17
   2: std::panicking::begin_panic_handler::{{closure}}
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:586:13
   3: std::sys_common::backtrace::__rust_end_short_backtrace
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/sys_common/backtrace.rs:138:18
   4: rust_begin_unwind
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/std/src/panicking.rs:584:5
   5: core::panicking::panic_fmt
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/panicking.rs:143:14
   6: core::panicking::panic
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/panicking.rs:48:5
   7: core::num::<impl u64>::next_power_of_two
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/num/uint_macros.rs:2188:13
   8: quanta::Calibration::adjust_cal_ratio
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/lib.rs:272:25
   9: quanta::Calibration::calibrate
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/lib.rs:226:13
  10: quanta::Clock::new::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/lib.rs:307:17
  11: once_cell::sync::OnceCell<T>::get_or_init::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:974:57
  12: once_cell::imp::OnceCell<T>::initialize::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:95:19
  13: once_cell::imp::initialize_inner
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:171:31
  14: once_cell::imp::OnceCell<T>::initialize
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:93:9
  15: once_cell::sync::OnceCell<T>::get_or_try_init
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:1014:13
  16: once_cell::sync::OnceCell<T>::get_or_init
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:974:19
  17: quanta::Clock::new
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/lib.rs:305:31
  18: core::ops::function::FnOnce::call_once
             at /rustc/7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c/library/core/src/ops/function.rs:227:5
  19: once_cell::sync::OnceCell<T>::get_or_init::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:974:57
  20: once_cell::imp::OnceCell<T>::initialize::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:95:19
  21: once_cell::imp::initialize_inner
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:171:31
  22: once_cell::imp::OnceCell<T>::initialize
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/imp_std.rs:93:9
  23: once_cell::sync::OnceCell<T>::get_or_try_init
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:1014:13
  24: once_cell::sync::OnceCell<T>::get_or_init
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/once_cell-1.9.0/src/lib.rs:974:19
  25: quanta::get_now
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/lib.rs:532:9
  26: quanta::instant::Instant::now
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/quanta-0.9.3/src/instant.rs:25:9
  27: moka::common::time::Instant::now
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/common/time.rs:24:17
  28: moka::sync::base_cache::Inner<K,V,S>::current_time_from_expiration_clock
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/sync/base_cache.rs:760:13
  29: moka::sync::base_cache::BaseCache<K,V,S>::get_with_hash::{{closure}}
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/sync/base_cache.rs:170:23
  30: moka::cht::map::bucket_array_ref::BucketArrayRef<K,V,S>::get_key_value_and_then
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/cht/map/bucket_array_ref.rs:48:30
  31: moka::cht::segment::HashMap<K,V,S>::get_key_value_and_then
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/cht/segment.rs:306:9
  32: moka::sync::base_cache::Inner<K,V,S>::get_key_value_and_then
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/sync/base_cache.rs:630:9
  33: moka::sync::base_cache::BaseCache<K,V,S>::get_with_hash
             at /home/bryn/.asdf/installs/rust/1.60.0/registry/src/github.com-1ecc6299db9ec823/moka-0.8.2/src/sync/base_cache.rs:167:27
  34: moka::sync::cache::Cache<K,V,S>::get

The number of times needed to hit this is very low. Sometimes it happens within 10 invocations, sometimes 100.

`<Cache as Clone>` has overly-restrictive trait bounds

The various Cache types derive Clone, which means their Clone impls require K: Clone, V: Clone, S: Clone. This is overly restrictive: since the underlying BaseCache in each type has no trait bounds on its Clone impl, the Caches don't need them either.

Besides being annoying, this also means the cache key must be cloneable in order to clone the cache, even though that is not otherwise a requirement for cache keys.
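The usual fix is to hand-write the Clone impl so that only the shared handle is cloned; an illustrative sketch (not Moka's actual source):

    use std::marker::PhantomData;
    use std::sync::Arc;

    struct Cache<K, V, S> {
        base: Arc<Inner<K, V, S>>,
    }

    struct Inner<K, V, S> {
        _marker: PhantomData<(K, V, S)>,
    }

    // Unlike #[derive(Clone)], this impl puts no Clone bounds on K, V, S:
    // cloning the cache is just an Arc refcount bump.
    impl<K, V, S> Clone for Cache<K, V, S> {
        fn clone(&self) -> Self {
            Self { base: Arc::clone(&self.base) }
        }
    }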

Iterating cache

Hi @tatsuya6502 ,

I'm using moka cache (moka::future)...

use moka::future::{
    Cache,
    CacheBuilder,
};
...
lazy_static!{
    static ref WEB_CACHE: ResultCache =  CacheBuilder::new(10_000) // Max 10,000 elements
            .time_to_live(Duration::from_secs(15 * 60)) // Time to live (TTL): 15 minutes
            .time_to_idle(Duration::from_secs( 5 * 60)) // Time to idle (TTI):  5 minutes
            .build();  // Create the cache.
}

It is working well...

What is the best way to iterate through the elements in the cache?

I've looked through the code trying to find a way.
A keys(&self) -> Vec<K> method would work;
then I could iterate using the normal get.
Thanks...
JR
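For reference, later Moka releases added an iter() method; a usage sketch (assuming moka >= 0.9 with the future feature):

    use moka::future::Cache;

    #[tokio::main]
    async fn main() {
        let cache: Cache<u32, String> = Cache::new(100);
        cache.insert(1, "one".to_string()).await;
        cache.insert(2, "two".to_string()).await;

        // Yields (Arc<K>, V) pairs; iteration order is unspecified.
        for (key, value) in cache.iter() {
            println!("{key}: {value}");
        }
    }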

Conditional/Delayed Eviction

Hi! First, thanks so much for building moka! I recently encountered a situation where, when storing large cached values in Arc, the cache would evict an entry while it was still shared, so getting it again would result in a cache miss despite the object still being alive elsewhere. A rough idea I had for countering this is to allow setting a conditional-eviction callback in the cache builder with the signature Fn(&K, &V) -> bool. If an entry is set for eviction, the callback is invoked, and if it returns false, the entry is kept in the cache. (In the common Arc case, this would be |_, v| 1 < Arc::strong_count(v).) Entries that skip eviction could be added to a separate list that is revisited each time an item is evicted from the cache, with the potential to make it adaptive (e.g., items that are skipped repeatedly aren't checked on every iteration, saving overhead for long-lived entries).

This is all a very rough sketch, but it could be helpful in high-performance scenarios.

Feature request: Add an `eviction_callback` parameter to the cache builder

Hey there, first and foremost thanks for moka!

I currently have a use case where I need to keep other data structures in sync with a Moka cache. One way I thought of doing this is for the cache builder to support a new eviction_callback method, which takes a closure that is called on every cache eviction, ideally with both the key and the value as parameters. Do you think that's feasible, and if so, is this something you'd be interested in having added to moka?

I envision something like:

pub fn eviction_callback<F>(&self, callback: F)
where
    F: Fn(&K, &V) + Send + Sync + 'static,

or

pub fn eviction_callback<F>(&self, callback: F)
where
    F: Fn(K, &V) + Send + Sync + 'static,
    K: Copy

Is it worth to modify the key of `get_with` as a `&K` rather than `K`?

Again, thanks for the great library!

Currently, the get_with signature requires me to pass the key by value rather than as &K, whereas the get method lets me pass the key by reference.

It is definitely reasonable that the function needs the key by value, as it might need to insert into the cache if the value is not present. On the other hand, it feels a bit less efficient if I know my access pattern will mostly hit the cache and only sometimes needs to insert.

Using get_with with K by value means I need to explicitly clone the key every time I call get_with, even when I know the value is already in the cache. This sounds like a micro-optimization, but it feels like there might be some room to make it better.

Is it okay to have get_with take &K as input and only clone it when necessary? (That would be a breaking change, so maybe a new API? I'm not sure it's worth the effort for such a small change. :P) One idea is to require K: Clone and only clone when needed. Of course, maybe I'm missing some context here, and happy to hear if this is not possible. Anyway, it's a very minor thing. :)

Thanks!
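A caller-side workaround sketch that clones the key only on a miss; note that, unlike get_with, the probe-then-insert below is not atomic, so two threads may both run the initializer:

    use moka::sync::Cache;

    fn get_or_insert(cache: &Cache<String, u64>, key: &String) -> u64 {
        // Probe with a borrowed key first; no clone on a hit.
        if let Some(v) = cache.get(key) {
            return v;
        }
        let v = expensive_init(key);
        cache.insert(key.clone(), v); // clone only on the miss path
        v
    }

    fn expensive_init(_key: &str) -> u64 {
        42 // stand-in for the real initializer
    }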

high resource usage in the housekeeper

Hi, we're using Moka in a query cache in the Apollo Router, and we are seeing high CPU usage in the housekeeper code, as shown in this flamegraph of a benchmark that uses one core and does not insert or get anything from the Moka cache (the large tower on the left is the housekeeper):

[flamegraph image]

With 4 cores it is much lower, but still 2.45% of sampled time, for a cache that is doing nothing:

[flamegraph image]

Do you have an idea why it would cost that much to run the housekeeper?

feat: API stabilization

Goals

  • Smaller core cache API
  • Prefer shorter names for common methods

Examples

  • get_or_insert_with(K, F) → get_with(K, F)
  • get_or_try_insert_with(K, F) → try_get_with(K, F)
  • blocking_insert(K, V) → blocking().insert(K, V)
  • time_to_live() → policy().time_to_live()

cache line optimized CountMin4

The original frequency sketch in Caffeine would distribute the counters uniformly across the table. This had the benefit of giving the greatest opportunity for avoiding hash collisions, which could increase the error rate for popularity estimates. However this also meant that each counter was at an unpredictable location and very likely to cause a hardware cache miss. This was somewhat mitigated due to the maintenance performing a batch of work, so temporal locality for popular items may be helpful.

The improved sketch uniformly selects a 64-byte block in which all of an item's counters reside. This is the size of an L1 cache line, so only one memory access is needed. The counters are selected from 16-byte segments using a secondary hash. In my benchmarks this improves the frequency-estimation and increment throughput by up to 2.5x. The eviction throughput is 17% faster, but get/put is unchanged, since the maintenance is non-blocking.
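An illustrative sketch of the block-based indexing described above (not Caffeine's or Moka's actual code):

    // View the counter table as 64-byte blocks of eight u64 words. One
    // block is chosen per item, then each of the four counters is picked
    // from one of the block's 16-byte (two-word) segments, so all four
    // accesses hit the same cache line. Assumes num_blocks > 0.
    fn counter_word_indices(num_blocks: usize, hash: u64) -> [usize; 4] {
        let block = (hash as usize) % num_blocks;
        let base = block * 8; // index of the block's first u64 word
        let mut idx = [0usize; 4];
        let mut h = hash;
        for (i, slot) in idx.iter_mut().enumerate() {
            // Remix the hash to select a word within segment `i`.
            h = h.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17);
            *slot = base + i * 2 + ((h & 1) as usize);
        }
        idx
    }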

Java does not yet support using SIMD instructions (a preview feature), but that could provide an additional speedup. A .NET developer ported an earlier prototype and observed that operation times were cut in half.

This is just for fun; feel free to close if not interested. It is unlikely that users will observe a speedup.

Some cht tests failed for 32-bit MIPS and ARMv5TE targets

For v0.7.2, we copied some source files of the concurrent hash table (cht) from the moka-cht crate and removed moka-cht from the dependencies. When we enabled the cht tests in Moka, some of them failed for 32-bit MIPS and ARMv5TE targets. We disabled these failing tests and released v0.7.2. #86 (comment)

Since then, I have done some investigation and come to the following conclusion:

  • Those failures were likely due to some incomplete parallelism supports in QEMU user space emulator.
    • This is confirmed by running the same tests on QEMU system emulator. (It was able to run the tests with no failures)
  • Therefore, we can safely ignore these failing tests.

Here is a summary of my investigation:

The issue

Some cht tests failed for 32-bit MIPS and ARMv5TE targets. These platforms are used by some users of aliyundrive-webdav, because SoC chips with these architectures are still common in home Wi-Fi routers and NAS models.

Failing tests

mips-unknown-linux-musl and mipsel-unknown-linux-musl targets

  1. concurrent_growth
  2. concurrent_growth_and_removal
  3. concurrent_insert_with_or_modify
  4. concurrent_insertion
  5. concurrent_insertion_and_removal
  6. concurrent_overlapped_insertion
  7. concurrent_overlapped_removal
  8. concurrent_removal

Error

memory allocation of 1052 bytes failed. signal: 6, SIGABRT: process abort signal.

armv5te-unknown-linux-musleabi target

  1. concurrent_overlapped_growth

Error

signal: 4, SIGILL: illegal instruction

Some backgrounds

  • Moka uses cross-rs/cross for testing with cross compilation.
  • For these platforms with failing tests, cross uses QEMU's user space emulator.
  • Unlike QEMU system emulator (virtual machines), the user space emulator is not very mature and has some limitations and issues:
    • From cross's README:
      • "Testing support (cross test) is more complicated. It relies on QEMU emulation, so testing may fail due to QEMU bugs rather than bugs in your crate."
    • From the "User space emulator" chapter of QEMU manual:
      • "Threading: ... Note that not all targets currently emulate atomic operations correctly."

Investigation: Running the tests on a VM instead of the user space emulator

To check whether those failures were caused by issues with the user space emulator, I ran the same tests using the system emulator with a MIPS-based emulated CPU (qemu-system-mips -M malta).

The OpenWrt project distributes firmware (embedded Linux) images that can run on QEMU system emulators. One of these images is for the MIPS architecture: "OpenWrt in QEMU MIPS"

Steps

  1. Ran tests using cross => It failed as expected.
  2. Started qemu-system-mips (a virtual machine) using the OpenWrt firmware.
  3. On the VM, copied the testing binary built at step 1.
  4. Ran the binary on the VM.

Result (MIPS)

I verified that all cht tests succeeded on the MIPS system emulator. I concluded the failures were due to issues in the user space emulator and that we can safely ignore them.

root@OpenWrt:~# uname -a
Linux OpenWrt 5.4.154 #0 SMP Sun Oct 24 09:01:35 2021 mips GNU/Linux

root@OpenWrt:~# ./moka-tests --test-threads 1 cht

running 18 tests
test cht::segment::map::tests::concurrent_growth ... ok
test cht::segment::map::tests::concurrent_growth_and_removal ... ok
test cht::segment::map::tests::concurrent_insert_with_or_modify ... ok
test cht::segment::map::tests::concurrent_insertion ... ok
test cht::segment::map::tests::concurrent_insertion_and_removal ... ok
test cht::segment::map::tests::concurrent_overlapped_growth ... ok
test cht::segment::map::tests::concurrent_overlapped_insertion ... ok
test cht::segment::map::tests::concurrent_overlapped_removal ... ok
test cht::segment::map::tests::concurrent_removal ... ok
test cht::segment::map::tests::drop_many_values ... ok
test cht::segment::map::tests::drop_many_values_concurrent ... ok
test cht::segment::map::tests::drop_value ... ok
test cht::segment::map::tests::growth ... ok
test cht::segment::map::tests::insert_with_or_modify ... ok
test cht::segment::map::tests::insertion ... ok
test cht::segment::map::tests::removal ... ok
test cht::segment::map::tests::remove_if ... ok
test cht::segment::map::tests::single_segment ... ok

test result: ok. 18 passed; 0 failed; 0 ignored; 0 measured; 43 filtered out; finished in 2145.54s

A note on ARMv5TE architecture

I could not do the same test for ARMv5TE.

  • OpenWrt distributes a firmware image for the ARM architecture on QEMU.
    • However, it seems that image is for the ARMv7 architecture.
  • I also could not find a way to run the system emulator with an ARMv5TE-based CPU.

Memory utilization grows significantly beyond max_capacity() under load

Thank you for the excellent work on this interesting project. I've adopted Moka in a project I'm working on as a memory-based cache. I'm passing a struct to bincode, encoding it, and storing it in the cache, then running the reverse as needed. The system is a web service that uses Actix. In practice, things are working fine.

However, when I run a load-generating program (wrk) against a sample call, the memory utilization of the cache increases considerably and does not appear to respect the max_capacity() setting. It does not decrease after load generation completes, and subsequent wrk runs continue to increase the memory footprint, likely indefinitely.

As an example:

lazy_static! {
    static ref MOKA_CACHE: MCache<Uuid, CacheItem> = MCache::builder()
        .weigher(|_key, value: &CacheItem| -> u32 {
            //let size = size_of_val(&*value.data) as u32;
            let size = value.data.len().try_into().unwrap_or(u32::MAX);
            size
        })
        .max_capacity(
                 256
                * 1024
                * 1024
        )
        .build();
}

1: At the start of the load generation, top indicates around 15M of memory utilization:

6488 erp_service_acco 0.0 00:01.19 29 0 50 15M

2: Executing a wrk request yields:

wrk -d 10 -t 8 -c 100 http://localhost:8080/v1/account/customer/test/1

Running 10s test @ http://localhost:8080/v1/account/customer/test/1
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.16ms    1.79ms  48.82ms   99.58%
    Req/Sec    11.23k     0.92k   13.45k    83.66%
  902635 requests in 10.10s, 314.20MB read
Requests/sec:  89370.39
Transfer/sec:     31.11MB

Top now indicates around 449M of memory utilization

6488 erp_service_acco 0.0 01:19.42 45 0 66 449M

3: Executing a longer wrk request again:

wrk -d 100 -t 8 -c 100 http://localhost:8080/v1/account/customer/test/1
Running 2m test @ http://localhost:8080/v1/account/customer/test/1
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.97ms   12.33ms 306.14ms   92.16%
    Req/Sec     7.77k     4.35k   14.75k    71.28%
  6184606 requests in 1.67m, 2.10GB read
Requests/sec:  61818.19
Transfer/sec:     21.52MB

Top now indicates around 933M of memory utilization

6488 erp_service_acco 0.0 15:53.12 46 0 67 933M

Memory utilization continues to grow into the GB+ range. Based on the max_capacity setting above and the documentation in the README, I would expect the footprint to be around 256 MiB plus Actix's overhead.

To rule out Actix as a factor, I ran the same tests without any calls to the code that interacts with Moka; Actix does not grow beyond a 21MB footprint.

Data is being inserted into the cache like so:

        let encoded = bincode::serde::encode_to_vec(&cache_data, config)?;
        let cache_item = CacheItem::new(encoded, 120)?;
        cache.insert(key.clone(), cache_item).await;

and retrieved like so:

  let cache_item = match cache.get(key) {
            None => {
                /* removed for simplicity */
            }
            Some(item) => {
                /* removed for simplicity */
                item
            }
        };

        let (decoded, _len): (T, usize) =
            bincode::serde::decode_from_slice(&cache_item.data[..], config)?;

I also repeated the tests without the interaction with bincode.

Is there something I'm misunderstanding about how to restrict Moka's memory utilization via the max_capacity() setting (or another one)?

Any thoughts on this would be greatly appreciated.

moka::future::Cache::get_or_insert_with() panics if previously inserting task aborted

When writing a web server, it appears that hyper::Server can abort tasks if the requester has gone away. moka::future::Cache does not like that and panics. Is this a bug? Or is there a recommended way to deal with it?

Minimized example:

[package]
name = "moka-future-bug"
version = "0.1.0"
edition = "2021"

[dependencies]
moka = { version = "0.6.2", features = ["future"] }
tokio = { version = "1.15.0", features = ["full"] }

use moka::future::Cache;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let cache_a: Cache<(), ()> = Cache::new(1);

    let cache_b = cache_a.clone();

    let handle = tokio::task::spawn(async move {
        cache_b
            .get_or_insert_with((), async {
                tokio::time::sleep(Duration::from_millis(1000)).await;
            })
            .await;
    });

    tokio::time::sleep(Duration::from_millis(500)).await;
    handle.abort();

    cache_a.get_or_insert_with((), async {}).await; // panics!
}

Backtrace:

thread 'main' panicked at 'Too many retries. Tried to read the return value from the `init` \
                                future but failed 200 times. Maybe the `init` kept panicking?', /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.6.2/src/future/value_initializer.rs:138:33
stack backtrace:
   0: rust_begin_unwind
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/std/src/panicking.rs:498:5
   1: core::panicking::panic_fmt
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/panicking.rs:107:14
   2: moka::future::value_initializer::ValueInitializer<K,V,S>::do_try_init::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.6.2/src/future/value_initializer.rs:138:33
   3: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/future/mod.rs:80:19
   4: moka::future::value_initializer::ValueInitializer<K,V,S>::init_or_read::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.6.2/src/future/value_initializer.rs:53:9
   5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/future/mod.rs:80:19
   6: moka::future::cache::Cache<K,V,S>::get_or_insert_with_hash_and_fun::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.6.2/src/future/cache.rs:621:15
   7: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/future/mod.rs:80:19
   8: moka::future::cache::Cache<K,V,S>::get_or_insert_with::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/moka-0.6.2/src/future/cache.rs:363:9
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/future/mod.rs:80:19
  10: moka_future_bug::main::{{closure}}
             at ./src/main.rs:21:5
  11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/future/mod.rs:80:19
  12: tokio::park::thread::CachedParkThread::block_on::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/park/thread.rs:263:54
  13: tokio::coop::with_budget::{{closure}}
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/coop.rs:102:9
  14: std::thread::local::LocalKey<T>::try_with
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/std/src/thread/local.rs:413:16
  15: std::thread::local::LocalKey<T>::with
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/std/src/thread/local.rs:389:9
  16: tokio::coop::with_budget
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/coop.rs:95:5
  17: tokio::coop::budget
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/coop.rs:72:5
  18: tokio::park::thread::CachedParkThread::block_on
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/park/thread.rs:263:31
  19: tokio::runtime::enter::Enter::block_on
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/runtime/enter.rs:151:13
  20: tokio::runtime::thread_pool::ThreadPool::block_on
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/runtime/thread_pool/mod.rs:77:9
  21: tokio::runtime::Runtime::block_on
             at /home/niklas/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.15.0/src/runtime/mod.rs:463:43
  22: moka_future_bug::main
             at ./src/main.rs:21:5
  23: core::ops::function::FnOnce::call_once
             at /rustc/efec545293b9263be9edfb283a7aa66350b3acbf/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
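
Until this is addressed inside moka itself, here is a possible user-side workaround sketch, an assumption on my part rather than an official fix: shield the call from cancellation by spawning it onto its own Tokio task, so that aborting the request task cannot drop the get_or_insert_with future mid-flight.

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<(), ()> = Cache::new(1);
    shielded_get(&cache).await;
}

async fn shielded_get(cache: &Cache<(), ()>) {
    let cache = cache.clone();
    // Run the whole cache call on its own task. Aborting the caller
    // merely drops the JoinHandle; the spawned task still runs to
    // completion, so moka's internal init state is never abandoned
    // halfway.
    let handle = tokio::task::spawn(async move {
        cache.get_or_insert_with((), async {}).await;
    });
    let _ = handle.await;
}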

Update documentation

The examples reference the methods blocking_insert and blocking_invalidate, which do not exist. Update the documentation to reference BlockingOp::insert and BlockingOp::invalidate instead.
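
For reference, a minimal sketch of the renamed API as I understand it (this assumes the future feature, and that blocking() returns a BlockingOp usable outside of an async context):

use moka::future::Cache;

fn main() {
    // blocking() lets us insert/invalidate without an async runtime.
    let cache: Cache<String, String> = Cache::new(100);
    let key = "key".to_string();

    cache.blocking().insert(key.clone(), "value".to_string());
    assert_eq!(cache.get(&key), Some("value".to_string()));

    cache.blocking().invalidate(&key);
    assert_eq!(cache.get(&key), None);
}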

Investigate Clippy warning about unsound Send impl

Since the Rust beta channel moved to 1.58, weekly builds have been failing with Clippy warnings (e.g. CI #579). This can be reproduced locally by running cargo +beta clippy --features future -- -D warnings with the latest beta 1.58.

I added these unsafe impls on purpose because I thought they were correct, but now Clippy says they are unsound. Investigate and address these warnings.
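
For context, here is a minimal self-contained reproduction of the lint, deliberately simplified and unrelated to moka's actual internals:

use std::rc::Rc;

// A field type that is `!Send` no matter what `T` is.
struct Inner<T> {
    shared: Rc<T>,
}

struct Cache<T> {
    inner: Inner<T>,
}

// `non_send_fields_in_send_ty` fires here: the bound `T: Send` does
// not make `inner` Send, because `Rc` is `!Send` unconditionally.
unsafe impl<T: Send> Send for Cache<T> {}

fn main() {}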

$ cargo +beta clippy -V 
clippy 0.1.58 (0e07bcb68b8 2021-12-04)

$ cargo +beta clippy --features future -- -D warnings

...
error: this implementation is unsound, as some fields in `Cache<K, V, S>` are `!Send`
   --> src/future/cache.rs:196:1
    |
196 | / unsafe impl<K, V, S> Send for Cache<K, V, S>
197 | | where
198 | |     K: Send + Sync,
199 | |     V: Send + Sync,
200 | |     S: Send,
201 | | {
202 | | }
    | |_^
    |
    = note: `-D clippy::non-send-fields-in-send-ty` implied by `-D warnings`
note: the type of field `base` is `!Send`
   --> src/future/cache.rs:192:5
    |
192 |     base: BaseCache<K, V, S>,
    |     ^^^^^^^^^^^^^^^^^^^^^^^^
    = help: add bounds on type parameters `K, V, S` that satisfy `BaseCache<K, V, S>: Send`
note: the type of field `value_initializer` is `!Send`
   --> src/future/cache.rs:193:5
    |
193 |     value_initializer: Arc<ValueInitializer<K, V, S>>,
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = help: add bounds on type parameters `K, V, S` that satisfy `Arc<ValueInitializer<K, V, S>>: Send`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#non_send_fields_in_send_ty

error: this implementation is unsound, as some fields in `Cache<K, V, S>` are `!Send`
   --> src/sync/cache.rs:168:1
    |
168 | / unsafe impl<K, V, S> Send for Cache<K, V, S>
169 | | where
170 | |     K: Send + Sync,
171 | |     V: Send + Sync,
172 | |     S: Send,
173 | | {
174 | | }
    | |_^
    |
note: the type of field `base` is `!Send`
   --> src/sync/cache.rs:164:5
    |
164 |     base: BaseCache<K, V, S>,
    |     ^^^^^^^^^^^^^^^^^^^^^^^^
    = help: add bounds on type parameters `K, V, S` that satisfy `BaseCache<K, V, S>: Send`
note: the type of field `value_initializer` is `!Send`
   --> src/sync/cache.rs:165:5
    |
165 |     value_initializer: Arc<ValueInitializer<K, V, S>>,
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = help: add bounds on type parameters `K, V, S` that satisfy `Arc<ValueInitializer<K, V, S>>: Send`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#non_send_fields_in_send_ty

error: this implementation is unsound, as some fields in `Deques<K>` are `!Send`
  --> src/sync/deques.rs:17:1
   |
17 | unsafe impl<K> Send for Deques<K> {}
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
note: the type of field `window` is `!Send`
  --> src/sync/deques.rs:7:5
   |
7  |     pub(crate) window: Deque<KeyHashDate<K>>, //    Not used yet.
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   = help: add bounds on type parameter `K` that satisfy `Deque<KeyHashDate<K>>: Send`
note: the type of field `probation` is `!Send`
  --> src/sync/deques.rs:8:5
   |
8  |     pub(crate) probation: Deque<KeyHashDate<K>>,
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   = help: add bounds on type parameter `K` that satisfy `Deque<KeyHashDate<K>>: Send`
note: the type of field `protected` is `!Send`
  --> src/sync/deques.rs:9:5
   |
9  |     pub(crate) protected: Deque<KeyHashDate<K>>, // Not used yet.
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   = help: add bounds on type parameter `K` that satisfy `Deque<KeyHashDate<K>>: Send`
note: the type of field `write_order` is `!Send`
  --> src/sync/deques.rs:10:5
   |
10 |     pub(crate) write_order: Deque<KeyDate<K>>,
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   = help: add bounds on type parameter `K` that satisfy `Deque<KeyDate<K>>: Send`
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#non_send_fields_in_send_ty

error: this implementation is unsound, as some fields in `SegmentedCache<K, V, S>` are `!Send`
  --> src/sync/segment.rs:27:1
   |
27 | / unsafe impl<K, V, S> Send for SegmentedCache<K, V, S>
28 | | where
29 | |     K: Send + Sync,
30 | |     V: Send + Sync,
31 | |     S: Send,
32 | | {
33 | | }
   | |_^
   |
note: the type of field `inner` is `!Send`
  --> src/sync/segment.rs:24:5
   |
24 |     inner: Arc<Inner<K, V, S>>,
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^
   = help: add bounds on type parameters `K, V, S` that satisfy `Arc<Inner<K, V, S>>: Send`
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#non_send_fields_in_send_ty

error: could not compile `moka` due to 4 previous errors

Request: Some way to determine if `future::Cache::get_with()` returned cached or fresh value

I want to know whether future::Cache::get_with() returned a cached or a fresh value. This is important for logging purposes. I imagine there are workarounds involving heap-allocating a mutable boolean, setting it in the future I pass to the function, and testing it when the function returns, but that's very awkward. It would be great if the cache could simply tell me whether the value was fresh or cached.
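
For completeness, here is a sketch of the workaround described above, using an AtomicBool set from inside the init future. It assumes a moka version with get_with (named get_or_insert_with in older releases); the flag stays false for callers that merely wait on another task's in-flight init, which is the desired behavior.

use std::sync::atomic::{AtomicBool, Ordering};
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u32, String> = Cache::new(100);

    let was_fresh = AtomicBool::new(false);
    let value = cache
        .get_with(1, async {
            // This block runs only when no cached value exists (and no
            // other task is already initializing the same key).
            was_fresh.store(true, Ordering::SeqCst);
            "computed".to_string()
        })
        .await;

    if was_fresh.load(Ordering::SeqCst) {
        println!("fresh: {}", value);
    } else {
        println!("cached: {}", value);
    }
}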

Moka counter & multiple cache integration

Dear Sir,
I'm a new rustacean so sorry if my questions are a little amateur! I really love the concept of moka and what you've built, it is exactly what I was searching for!

I would like to integrate multiple async / future caches with different TTL & capacities, to run on my actix web server.

I have two key questions I would like to ask:

  1. I would love to use a moka future cache as a counter, but I noticed that there is no equivalent of the classic "entry API" found in std::collections::HashMap.
    Like this Reddit example counter.
    The entry API lets you look up a key once and either insert a default value (e.g. start the counter at 1) or, if the key exists, simply do += 1, avoiding two lookups.
    I searched the source code and saw that moka-cht offers an "insert_or_modify" function which could reproduce this!
    From what I understand, the moka future cache only implements get, get_with and insert, not moka-cht's "insert_or_modify". So how would I go about building a counter using moka futures?
    Ideally I would love to just do a get and increment the counter if found. I would also be willing to compromise and use "insert_or_modify" from the moka-cht crate; I'm just not sure how to integrate it into the moka future cache. Is insert_or_modify synchronous? I am assuming it is lower level / upstream? (A workaround sketch I put together follows after question 2.)

  2. I noticed that to enforce the TTL policy, a housekeeper runs on a "scheduled thread pool", while actix-web runs one web server app instance per thread.
    Does this mean that the moka cache can be friendly and share the CPU with the actix-web server? I have read that web servers perform blocking operations, so can the moka housekeeper fit itself in around this?
    I know you have stated we are fine for actix-rt, but I just wanted some reassurance that multiple caches, say 4 caches for example, would be able to share a single CPU with the actix-web server app. Would this create 4 unique housekeepers that would all share my single thread with my web server?
    I am assuming the cleanup process would spread out and my cloud provider may not be very happy, but as long as it works I'm happy!
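
Here is the workaround sketch mentioned in question 1, storing an Arc<AtomicU64> as the value so that "insert default or increment" becomes one lookup plus a fetch_add. This is my own assumption about how to approach it, and it relies on a moka version with get_with (named get_or_insert_with in older releases):

use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use moka::future::Cache;

#[tokio::main]
async fn main() {
    // Values are Arc<AtomicU64>, so the clone handed out by the cache
    // still refers to the same shared counter.
    let counters: Cache<String, Arc<AtomicU64>> = Cache::new(10_000);

    for _ in 0..3 {
        // get_with inserts the zeroed counter only if the key is
        // absent; concurrent callers for the same key share one insert.
        let counter = counters
            .get_with("hits".to_string(), async { Arc::new(AtomicU64::new(0)) })
            .await;
        counter.fetch_add(1, Ordering::SeqCst);
    }

    let counter = counters
        .get_with("hits".to_string(), async { Arc::new(AtomicU64::new(0)) })
        .await;
    assert_eq!(counter.load(Ordering::SeqCst), 3);
}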

Many thanks for your time, amazing work!!! 🙌🏽

[question] The laziness of expiration

Hello maintainer, I'd like to know whether cache expiration is lazy or active.
By lazy I mean that an expired entry stays in the cache until a get on it (or some other operation that has to touch it) removes it;
correspondingly, by active I mean that as soon as an entry outlives its TTL or TTI, it is instantly evicted.
I'm using this crate to build a server-side connection pool, and I'd like any expired connections to be dropped immediately. Nothing about this is in the README, but judging from the invalidate_entries_if() method I guess this cache is lazy?
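
To make the question concrete, here is a sketch (assuming the sync cache and its time_to_live builder option); my understanding is that the assert passes even though the value's memory may only be reclaimed later:

use std::time::Duration;
use moka::sync::Cache;

fn main() {
    let cache: Cache<u32, String> = Cache::builder()
        .time_to_live(Duration::from_millis(100))
        .build();

    cache.insert(1, "connection".to_string());
    std::thread::sleep(Duration::from_millis(200));

    // Expiration seems to be enforced on access: the expired entry is
    // invisible to reads here, even though its removal (and the
    // value's Drop) may only happen later, during housekeeping.
    assert_eq!(cache.get(&1), None);
}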

expose raw hash APIs

I have a use case where I want to cache an object that is keyed by a fairly large deserialized object. Calculating the hash for this key is already done as part of deserializing and parsing, and it would be a substantial performance gain to not have to rehash it. I saw that there are functions that are currently pub(crate), such as get_with_hash or insert_with_hash. Would it be possible to expose them in the public API?
If you don't want those functions to be completely public for API-stability reasons, it would still be nice to expose them behind a feature flag, for those of us who have some specialized use cases :)
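
In case it helps, here is a caller-side workaround sketch I considered; it is an assumption on my part, not the requested API. It wraps the large key together with its precomputed hash and implements Hash to feed only that hash, so the cache never rehashes the large key (equality still compares the full keys, so hash collisions remain safe):

use std::hash::{Hash, Hasher};
use moka::sync::Cache;

#[derive(Clone, PartialEq, Eq)]
struct PreHashed<K> {
    key: K,
    hash: u64,
}

impl<K: Eq> Hash for PreHashed<K> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Feed only the precomputed hash; the large key is never
        // walked again by the cache's hasher.
        state.write_u64(self.hash);
    }
}

fn main() {
    let cache: Cache<PreHashed<String>, u32> = Cache::new(100);
    // In practice, `hash` would come from the deserialization step.
    let key = PreHashed {
        key: "large-deserialized-key".to_string(),
        hash: 0xDEAD_BEEF,
    };
    cache.insert(key.clone(), 42);
    assert_eq!(cache.get(&key), Some(42));
}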
