urkel's Issues

Error: ENOENT: no such file or directory, uv_os_get_passwd

I get this error when trying to run hsd in a systemd service. The error is thrown on this line.

From googling around I see some similar issues where this can occur if the user in question does not have an entry in /etc/passwd.

Full Stack Trace

os.js:272
    throw new ERR_SYSTEM_ERROR(ctx);
    ^
SystemError [ERR_SYSTEM_ERROR]: A system error occurred: uv_os_get_passwd returned ENOENT (no such file or directory)
    at Object.userInfo (os.js:272:11)
    at Object.<anonymous> (/nix/store/6ndjrl5111qxdiz2wpvb0v30msra0ypk-node_hsd-2.2.0/lib/node_modules/hsd/node_modules/urkel/lib/radix/mfs.js:21:8)
    at Module._compile (internal/modules/cjs/loader.js:1137:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
    at Module.load (internal/modules/cjs/loader.js:985:32)
    at Function.Module._load (internal/modules/cjs/loader.js:878:14)
    at Module.require (internal/modules/cjs/loader.js:1025:19)
    at require (internal/modules/cjs/helpers.js:72:18)
    at Object.<anonymous> (/nix/store/6ndjrl5111qxdiz2wpvb0v30msra0ypk-node_hsd-2.2.0/lib/node_modules/hsd/node_modules/urkel/lib/radix/store.js:17:13)
    at Module._compile (internal/modules/cjs/loader.js:1137:30) {
  code: 'ERR_SYSTEM_ERROR',
  info: {
    errno: -2,
    code: 'ENOENT',
    message: 'no such file or directory',
    syscall: 'uv_os_get_passwd'
  },
  errno: [Getter/Setter],
  syscall: [Getter/Setter]
}
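For reference, the trace points at os.userInfo() being called when mfs.js is loaded, and that call throws when the running user has no /etc/passwd entry (common under systemd's DynamicUser or in minimal containers). A hedged sketch of a defensive fallback, not urkel's current code:

    const os = require('os');

    // Fall back to numeric IDs and environment variables when os.userInfo()
    // fails because the current uid has no passwd entry.
    function safeUserInfo() {
      try {
        return os.userInfo();
      } catch (e) {
        return {
          uid: typeof process.getuid === 'function' ? process.getuid() : -1,
          gid: typeof process.getgid === 'function' ? process.getgid() : -1,
          username: process.env.USER || 'nobody',
          homedir: process.env.HOME || os.tmpdir(),
          shell: null
        };
      }
    }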

Support range proofs

Nice work on this repo; it seems like the merklized state storage backend is the transaction-processing bottleneck in pretty much every blockchain project.

I'm thinking of replacing my own LevelDB-based tree implementation in my module merk with urkel, although it would be nice if it supported generating/verifying range proofs (e.g. you query with a start key and an end key, and get a proof of all the keys in the range, inclusive, or of the nearest keys before/after the range if the start or end keys aren't an exact match).
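To make the ask concrete, here is a hedged sketch of what a caller can do today by proving each key in the range individually; a native range proof would instead share the co-path between adjacent leaves. The snapshot.prove() call and the sorted key iterator are assumptions about the surrounding store, not a specific urkel API:

    // Naive emulation of a range query: one proof per key in [start, end].
    async function proveRangeNaively(snapshot, sortedKeys, start, end) {
      const proofs = [];
      for (const key of sortedKeys) {
        if (Buffer.compare(key, start) < 0)
          continue; // before the range
        if (Buffer.compare(key, end) > 0)
          break;    // past the range
        proofs.push([key, await snapshot.prove(key)]);
      }
      return proofs;
    }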

From there, it would be pretty easy to implement a wrapper on top of urkel which implements the leveldown interface, but that's another story.

Root tree cache and proofs

Getting old tree root hashes is only necessary when we need to generate proofs. Even though that request is rate limited, it should have consistent response times and not depend on the state of the tree.

In the current released version, the first request for an incorrect root hash (or a really old one) will prompt the tree to look up all previous root hashes on disk until the hash is found or everything has been scanned. This request will take some time and will be noticeably slow. After that first scan, the root cache will contain all the root hashes in the tree and lastMeta (used as an indexing pointer in this case) will be set to the first one and never change again. This makes subsequent requests for incorrect/old root hashes quick: either the hash is in the cache or it is not, and lastMeta is already at the oldest entry and won't move further into the past, so response times stay fast.

Even though it is only the one request, I believe caching should instead be done on open, so proof requests have consistent response times.

The recent fix to lastMeta (#16), which makes lastMeta consistent with its name and updates it every time a new root hash is committed, makes this issue more frequent (every 36 blocks we can get one slow lookup for an incorrect root hash).

Even though we could easily fix #16 by introducing a separate meta for indexing/caching, I believe we should instead change how we approach the lookups in the first place.

We can use the cache as the main source of truth for past tree root hashes: go to disk once when opening the tree and never scan it a second time. We can optionally add a limit (an additional parameter, e.g. initialRootHashCacheCount=100) to index only a fixed number of hashes and ignore any root hash older than that (with initialRootHashCacheCount=-1 meaning index everything).
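A rough sketch of that approach (the class and field names are illustrative, not urkel's internals): index historical root hashes once when the tree is opened, bounded by the optional limit, and answer later lookups from the cache only.

    // Illustrative cache of historical root hashes, built once on open.
    class RootCache {
      constructor(initialRootHashCacheCount = -1) {
        this.limit = initialRootHashCacheCount; // -1 means index everything.
        this.roots = new Map();                 // hex root hash -> meta record
      }

      // Called once from open(): walk metas from newest to oldest and cache
      // each historical root, stopping at the configured limit.
      indexOnOpen(metasFromNewestToOldest) {
        let count = 0;
        for (const meta of metasFromNewestToOldest) {
          if (this.limit >= 0 && count >= this.limit)
            break;
          this.roots.set(meta.rootHash.toString('hex'), meta);
          count += 1;
        }
      }

      // Later lookups never touch the disk: either the root is cached or the
      // request fails fast, giving consistent response times for proofs.
      lookup(rootHash) {
        return this.roots.get(rootHash.toString('hex')) || null;
      }
    }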

This problem was discovered by @pinheadmz.

Using `urkel` optimized lib with smaller key sizes

Wondering if it's possible to use keys of smaller size, i.e. keys that are less than 32 bytes.
You could pad the end of the keys before inserting them - is that a viable option?

For example, an urkel tree that takes 20-byte keys should not look any different from an urkel tree that takes 32-byte keys, given that you insert 20-byte keys padded with 12 bytes at the end. So I imagine there would not be significant performance gains from reducing the actual key length in the code?
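For what it's worth, a minimal sketch of the padding idea (zero padding assumed; hashing the short key up to 32 bytes is the usual alternative if trailing zeros could collide with real keys):

    // Right-pad a shorter key with zero bytes so it fits a 32-byte-keyed tree.
    function padKey(key, size = 32) {
      if (key.length > size)
        throw new Error('Key too long.');
      const padded = Buffer.alloc(size); // zero-filled
      key.copy(padded, 0);
      return padded;
    }

    const key20 = Buffer.from('aabbccddeeff00112233445566778899aabbccdd', 'hex');
    const key32 = padKey(key20); // 32 bytes, trailing 12 bytes are zero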

Major design change

We could possibly switch to all synchronous read calls (this improves performance a lot by taking pressure off the thread pool). To avoid blocking the main thread, the tree could run as a worker process or on its own thread (with node-webworker-threads).
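A minimal sketch of the worker idea using Node's built-in worker_threads module (not node-webworker-threads, and not urkel's API); the synchronous tree read is stubbed out so the sketch stays self-contained:

    const { Worker, isMainThread, parentPort } = require('worker_threads');

    if (isMainThread) {
      const worker = new Worker(__filename);

      // Hypothetical helper: send one key to the worker and await the value.
      const getFromTree = (key) => new Promise((resolve) => {
        worker.once('message', resolve);
        worker.postMessage(key);
      });

      getFromTree('example-key').then((value) => {
        console.log('value:', value);
        return worker.terminate();
      });
    } else {
      parentPort.on('message', (key) => {
        // In the real design this would be a synchronous urkel read;
        // here we just echo a value to keep the sketch runnable.
        parentPort.postMessage(`value-for-${key}`);
      });
    }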

In reality, the urkel tree should just be rewritten in C and bound to. We could batch multiple operations and dispatch everything in the thread pool at once.

Proof fields not used.

proof.isSane() checks for nil values for the deadend type. Is there even a need for this? Looking through the code, it appears there's no possible way for key, value, or hash to ever be set for a deadend. Is it simply good enough to check for the type without testing the unused fields? Or are these fields placeholders for future reference?

Consider switching to the merklix tree

There's a merklix tree implementation on the merklix branch.

You could say the way the urkel tree currently handles bit collisions is naive, but in benchmarks, there are no noticeable differences between the urkel tree and a merklix tree in terms of performance or storage. The average case proof size is also identical.

Still, we have to consider the possibility that people may try to DoS the tree. In this case, the merklix tree holds up better.

So, I'm hesitating here for a few reasons:

  1. This adds massive complexity to the implementation of the tree and proof verification.
  2. In order to reduce storage, nodes have to be stored as a variable size. I've been given advice to try a "merklix-2" or "merklix-4" tree (where only 2/4 colliding bits are stored on internal nodes, thus allowing us a constant size node), but I find this even more convoluted since it basically requires combining both methods of handling bit collisions. This also adds complexity with a new "pointer" system you'll notice on the merklix branch.
  3. As a result of the previous point, memory usage increases a lot (I want to say it's nearly double), since we now have a lot more properties/objects in memory to track size/bits/etc.

If we can solve the memory issue, I have no issue with using our merklix implementation.

Opening this up to discussion.

What's the purpose of the meta file?

This is really great work! I've started a port in go to get a better understanding of the code base.

What's the purpose of the separate meta file? It appears that it's only opened and closed to read/write a key on start-up. However (if I'm reading the code correctly) any new meta information on commit is written to the actual log file. I don't see where the meta file itself is updated on subsequent commits. Is the purpose of the meta file simply to hold the key? Thanks!
