
trie's Introduction


Trie

A generic implementation of the Base-16 Modified Merkle Tree ("Trie") data structure, provided under the Apache2 license.

The implementation comes in two formats:

  • Trie DB (trie-db crate) which can be combined with a backend database to provide a persistent trie structure whose contents can be modified and whose root hash is recalculated efficiently.
  • Trie Root (trie-root crate) which provides a closed-form function that accepts an enumeration of keys and values and calculates the root entirely in memory.

Trie Hash alone can be used in no_std builds by disabling its (default) std feature.
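In a downstream manifest that looks like the following (a sketch; the crate and version number shown are illustrative):

```toml
[dependencies]
trie-root = { version = "0.18", default-features = false }
```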

In addition to these, several support crates are provided:

  • hash-db crate, used to provide Hasher (trait for all things that can make cryptographic hashes) and HashDB (trait for databases that can have byte slices pushed into them and allow for them to be retrieved based on their hash). Suitable for no_std, though in that case it only provides Hasher.
  • memory-db crate, contains MemoryDB, an implementation of HashDB backed by an in-memory map.
  • hash256-std-hasher crate, an implementation of a std::hash::Hasher for 32-byte keys that have already been hashed. Useful for building the backing HashMap for MemoryDB.
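The idea behind hash256-std-hasher can be sketched in plain std (a hypothetical stand-in, not the crate's actual code): since the keys are already uniformly distributed cryptographic hashes, re-hashing them is wasted work, and the Hasher can simply take the first eight bytes of the key as the u64 hash value.

```rust
use std::hash::Hasher;

// Hypothetical sketch of the hash256-std-hasher idea: the input key is
// already a 32-byte cryptographic hash, so instead of re-hashing it we
// read its first 8 bytes directly as the u64 hash value.
#[derive(Default)]
struct PreHashedHasher(u64);

impl Hasher for PreHashedHasher {
    fn finish(&self) -> u64 {
        self.0
    }

    fn write(&mut self, bytes: &[u8]) {
        // Assumes the input is an already-hashed, >= 8-byte key.
        let mut buf = [0u8; 8];
        buf.copy_from_slice(&bytes[..8]);
        self.0 = u64::from_le_bytes(buf);
    }
}

fn main() {
    let key = [0xABu8; 32]; // stand-in for a Keccak-256 output
    let mut h = PreHashedHasher::default();
    h.write(&key);
    assert_eq!(h.finish(), u64::from_le_bytes([0xAB; 8]));
    println!("ok");
}
```

A BuildHasher wrapping this type could then parameterize the HashMap backing a MemoryDB.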

There are also four crates used only for testing:

  • keccak-hasher crate, an implementation of Hasher based on the Keccak-256 algorithm.
  • reference-trie crate, an implementation of a simple trie format; this provides both a NodeCodec and TrieStream implementation making it suitable for both Trie DB and Trie Root.
  • trie-standardmap crate, a key/value generation tool for creating large test datasets to specific qualities.
  • trie-bench crate, a comprehensive standard benchmarking tool for trie format implementations. Built on the criterion project, so benchmarking can be done with stable rustc.

In the spirit of all things Rust, this aims to be reliable, secure, and high performance.

Used in the Substrate project. If you use this crate and would like your project listed here, please contact us.

Building &c.

Building is done through cargo, as you'd expect.

Building

cargo build --all

Testing

cargo test --all

Benchmarking

cargo bench --all

Building in no_std

cargo build --no-default-features

trie's People

Contributors

0x7cfe, andresilva, arkpar, ascjones, bkchr, cheme, debris, dependabot[bot], dvdplm, gavofyork, gballet, gnunicorn, gui1117, hawstein, jimpo, koushiro, koute, montekki, ngotchac, niklasad1, nikvolf, ordian, pepyakin, rphmeier, snd, sorpaas, tafia, tomaka, tomusdrw, twittner


trie's Issues

Improve proof size

At some point, I think the trie produced inclusion proofs which spelled out all 15 sibling nodes of each radix-16 tree layer, which wastes roughly half the space in PoV blocks. We instead want Merkle roots to be computed as a binary tree, so the proofs can be sent in binary-tree form, while the tree can still be stored on disk as a radix-16 tree.
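The savings can be sketched with a back-of-the-envelope calculation (illustrative numbers only, assuming 32-byte hashes and a fixed traversal depth):

```rust
fn main() {
    let hash_len = 32usize; // bytes per node hash
    let depth_nibbles = 8usize; // traversed radix-16 levels (illustrative)

    // Radix-16 proof: each traversed branch spells out up to 15 sibling hashes.
    let radix16 = depth_nibbles * 15 * hash_len;

    // Binary form: each radix-16 level becomes 4 binary levels,
    // each contributing a single sibling hash.
    let binary = depth_nibbles * 4 * hash_len;

    println!("radix-16 proof: {radix16} bytes, binary: {binary} bytes");
    assert!(binary < radix16);
}
```

With these numbers the binary form needs roughly a quarter of the sibling-hash bytes, while the on-disk layout can stay radix-16.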

Consider changing MemoryDB::new to take no arguments

See paritytech/substrate#1427. MemoryDB is different from a lot of Rust storage types in that its new function takes an argument. I think it would make more sense for MemoryDB::new to construct an empty DB, like MemoryDB::default. A new from_data function could be added to replace the old new function:

pub fn from_data(data: &'a [u8]) -> Self {
    Self::from_null_node(data, data.into())
}

How to run a sample example?

I am unable to execute this test:

use hash_db::Hasher;
use reference_trie::{RefTrieDBMut, RefTrieDB};
use trie_db::DBValue;
use keccak_hasher::KeccakHasher;
use memory_db::*;

fn main() {
    let mut memdb = MemoryDB::<KeccakHasher, HashKey<_>, _>::default();
    let mut root = Default::default();
    RefTrieDBMut::new(&mut memdb, &mut root).insert(b"foo", b"bar").unwrap();
    let t = RefTrieDB::new(&memdb, &root);
    assert!(t.contains(b"foo").unwrap());
    assert_eq!(t.get(b"foo").unwrap().unwrap(), b"bar".to_vec());
}

But I get this error:

 --> src/main.rs:10:19
   |
10 |     RefTrieDBMut::new(&mut memdb, &mut root).insert(b"foo", b"bar").unwrap();
   |                   ^^^ function or associated item not found in `trie_db::triedbmut::TrieDBMut<'_, ExtensionLayout>`

And indeed, trie_db::triedbmut::TrieDBMut doesn't have a new method. Any help?

Use hashbrown instead of hashmap_core

This issue is keeping track of the fact that we use hashmap_core for the trie hashmap, but want to switch to another no_std implementation in the future.

The best candidate is probably hashbrown (already used in some other Parity crates), but in its current state the way no_std is managed makes things pretty awkward (it would require adding some cargo features).

So the current plan is more wait-and-see (hashbrown seems to be in the process of being adopted into the Rust std lib; maybe wait for that before working out how to make this switch properly).

When switching, parity-util-mem could also implement malloc_size_of for the chosen no_std implementation.

README.md isn't shown on crates.io

Figured that for any user of this crate it would be important for the README to be there, as many people (myself included) tend to dismiss crates that don't have one.

eBPF stack trace trie design

https://github.com/iovisor/bcc/blob/master/tools/profile.py (what Pyroscope uses under the hood) is a far-from-optimal logging format for stack traces. I have a design in mind for a binary logger to save network bandwidth when exporting traces.

The logger would emit references to immutable trie pointers. When done, or when the trie exceeds the memory limit, it would dump the trie tagged with the memory pointers.

Does this sound feasible? A more compressed version would have base64 counters on each trie node.

Root hash mismatch when compared to other libraries.

Hello,

We're developing a Patricia Merkle tree and would like to compare our implementation with yours (both for testing and benchmarking). However, we've found that the computed hash from your library doesn't match the ones computed by other libraries (including ours).

Hashing mismatch example.
//! # Basic example.
//!
//! Target hash extracted from here:
//!   https://github.com/ethereum/tests/blob/develop/TrieTests/trietest.json#L97-L104
//!
//! ## Dependencies
//!
//! ```toml
//! [dependencies]
//! cita_trie = "4.0.0"
//! hasher = "0.1.4"
//! hex-literal = "0.3.4"
//! memory-db = "0.31.0"
//! reference-trie = "0.26.0"
//! sha3 = "0.10.6"
//! trie-db = "0.24.0"
//!
//! [dependencies.patricia-merkle-tree]
//! git = "https://github.com/lambdaclass/merkle_patricia_tree"
//! ```

use cita_trie::{MemoryDB, PatriciaTrie, Trie};
use hasher::HasherKeccak;
use hex_literal::hex;
use patricia_merkle_tree::PatriciaMerkleTree;
use sha3::Keccak256;
use std::sync::Arc;

fn main() {
    const DATA: &[(&[u8], &[u8])] = &[(b"abc", b"123"), (b"abcd", b"abcd"), (b"abc", b"abc")];
    const HASH: [u8; 32] = hex!("7a320748f780ad9ad5b0837302075ce0eeba6c26e3d8562c67ccc0f1b273298a");

    println!("Expected   : {HASH:02x?}");
    println!("Our    hash: {:02x?}", run_ours(DATA.iter().copied()));
    println!("Cita   hash: {:02x?}", run_cita(DATA.iter().copied()));
    println!("Parity hash: {:02x?}", run_parity(DATA.iter().copied()));
}

fn run_ours<'a>(data: impl Iterator<Item = (&'a [u8], &'a [u8])>) -> [u8; 32] {
    let mut trie = PatriciaMerkleTree::<_, _, Keccak256>::new();

    data.for_each(|(p, v)| {
        trie.insert(p, v);
    });

    trie.compute_hash().as_slice().try_into().unwrap()
}

fn run_cita<'a>(data: impl Iterator<Item = (&'a [u8], &'a [u8])>) -> [u8; 32] {
    let mem_db = Arc::new(MemoryDB::new(true));
    let hasher = Arc::new(HasherKeccak::new());

    let mut trie = PatriciaTrie::new(mem_db, hasher);

    data.for_each(|(p, v)| {
        trie.insert(p.to_vec(), v.to_vec()).unwrap();
    });

    trie.root().unwrap().try_into().unwrap()
}

fn run_parity<'a>(data: impl Iterator<Item = (&'a [u8], &'a [u8])>) -> [u8; 32] {
    use memory_db::{HashKey, MemoryDB};
    use reference_trie::ExtensionLayout;
    use trie_db::{NodeCodec, TrieDBMutBuilder, TrieHash, TrieLayout, TrieMut};

    let mut mem_db =
        MemoryDB::<_, HashKey<_>, _>::new(<ExtensionLayout as TrieLayout>::Codec::empty_node());
    let mut root = <TrieHash<ExtensionLayout>>::default();

    let mut trie = TrieDBMutBuilder::<ExtensionLayout>::new(&mut mem_db, &mut root).build();

    data.for_each(|(p, v)| {
        trie.insert(p, v).unwrap();
    });

    trie.commit();
    *trie.root()
}

After some investigation we've found that although the hashing algorithm is the same (given the same inputs it will produce the same outputs), the inputs to the hashing function are different. As per the Ethereum wiki on Patricia Merkle tries, we've used RLP encoding in our implementation, and we also found references to it in yours.

When trying to decode (using RLP) the data passed to your hash function, we've found that it can't be RLP (or at least not the RLP from the Ethereum wiki), since it didn't make sense; for example, it decoded as an RLP list supposedly containing a few exabytes of data.

Do you use a specific (non-Ethereum-RLP) encoding that fits your needs? Can it be changed to use Ethereum's RLP?

Thanks.
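For reference, Ethereum's RLP rule for short byte strings (under 56 bytes) is small: a single byte below 0x80 encodes as itself; otherwise the payload is preceded by one prefix byte, 0x80 plus the length. A minimal sketch (short strings only; long strings and lists need additional rules):

```rust
// Minimal sketch of Ethereum RLP encoding for short byte strings
// (< 56 bytes). Longer strings and lists follow additional rules
// not shown here.
fn rlp_encode_short_str(data: &[u8]) -> Vec<u8> {
    assert!(data.len() < 56, "short-string rule only");
    if data.len() == 1 && data[0] < 0x80 {
        // A single byte below 0x80 encodes as itself.
        vec![data[0]]
    } else {
        // Otherwise: one prefix byte 0x80 + len, then the payload.
        let mut out = vec![0x80 + data.len() as u8];
        out.extend_from_slice(data);
        out
    }
}

fn main() {
    // Classic example from the RLP spec: "dog" -> [0x83, 'd', 'o', 'g'].
    assert_eq!(rlp_encode_short_str(b"dog"), vec![0x83, b'd', b'o', b'g']);
    // The empty string encodes as the bare prefix 0x80.
    assert_eq!(rlp_encode_short_str(b""), vec![0x80]);
    println!("ok");
}
```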

Trie-db lookup fails to find value in db after trie commit when using Ethereum RLP node codec.

Hey team, I've been banging my head trying to get proofs generated for Ethereum RLP leaf nodes. I've created a custom trie layout that specifies the Ethereum rlp node codec. I'm using the generate proof function in the trie-eip1186 module. I am able to successfully insert entries and retrieve them within the same scope but once the trie drops out of scope, I cannot retrieve the inserted entry.

I noticed the encoded root node value in the db after the trie.commit contains the value I just inserted but for some reason, the key does not retrieve it.

hashed_key = 798c6047767c10f653ca157a7f66a592a1d6ca550cae352912be0b0745336af

rlp_encoded_value = f8440180a00f460850d9716af3371839ff600d3d57ce12da330e95ac16f91da485fd8bd6c6a01e1706bdc2b9de10c4075b84a6181920bb73d94a161cb8044fc5d1c800030627

encoded_root_node = f869a0798c6047767c10f653ca157a7f66a592a1d6ca550cae352912be0b0745336afdb846f8440180a00f460850d9716af3371839ff600d3d57ce12da330e95ac16f91da485fd8bd6c6a01e1706bdc2b9de10c4075b84a6181920bb73d94a161cb8044fc5d1c800030627

I've noticed several issues and threads about this and no one seems to have the complete solution. I know the go-ethereum project has this working but it's using an ancient version of triedb. I feel like I'm super close and would love to publish a working example once it's working. Any help on this would be greatly appreciated.

Not able to retrieve value for given key when new key pair inserted

#[test]
fn test_two_assets_memory_db() {
    let mut memdb = MemoryDB::<BlakeTwo256>::new(&[0u8]);
    let mut root = H256::zero();

    let mut state = TrieDBMutBuilderV1::new(&mut memdb, &mut root).build();

    let key1 = [1u8; 3];
    let data1 = [1u8; 2];
    state.insert(key1.as_ref(), &data1).unwrap();
    assert_eq!(state.get(key1.as_ref()).unwrap().unwrap(), data1); // PASSING
    let key2 = [2u8; 3];
    let data2 = [2u8; 2];
    state.insert(key2.as_ref(), &data2).unwrap();
    assert_eq!(state.get(key1.as_ref()).unwrap().unwrap(), data1); // ERROR: 'tests::test_two_assets_memory_db' panicked at 'called `Option::unwrap()` on a `None` value'
    state.commit();
}

The test case test_two_assets_memory_db fails at the line assert_eq!(state.get(key1.as_ref()).unwrap().unwrap(), data1); when trying to fetch and assert the data associated with key1 after the second insertion. It appears that after the insertion of the data associated with key2, the data tied to key1 is either removed or not accessible, leading to the Option::unwrap() call on a None value, which causes the test to panic. The state database should support multiple insertions and retrievals; however, in this instance it fails to retrieve the value associated with key1 after a new key-value pair (key2, data2) has been inserted. Could you please help me with this?

HashDB on disk example

This is an awesome project and appreciate the amazing work.

Can someone help me understand how I can store a HashDB on disk? I've been using the MemoryDB but would like to persist the data.

Are there any examples using HashDB with a KV? How can I use TrieDB to store data and keep an up to date root?

Possible batch update of values

When updating multiple values, calling 'insert' multiple times on a 'triedbmut' does not look as efficient as doing it in a single pass.
With a batch update, we can avoid keeping the whole set of trie changes in memory.
This kind of batch update is only possible if the values to change are sorted.
It does not require caching node values like the current triedbmut does (just a stack of nodes).
It basically just moves the sort done by unordered triedbmut insertion to an earlier processing step.

In Substrate the trie is built by the 'storage_root' call https://github.com/paritytech/substrate/blob/2ac4dd8a7e116c40a80bf443d942da7163cad8ca/primitives/state-machine/src/trie_backend.rs#L153 , relying on multiple triedbmut calls to 'insert':
https://github.com/paritytech/substrate/blob/2ac4dd8a7e116c40a80bf443d942da7163cad8ca/primitives/trie/src/lib.rs#L133 .
If keys in the Substrate change overlay were stored in a sorted manner at an additional cost (a hypothesis for paritytech/substrate#4185), this cost could probably be recouped by a batch insertion in trie afterwards (it needs a bench to evaluate what kind of improvement we can get from a batch insertion implementation; it should not be much, but neither should be the loss from sorting keys in the Substrate change overlay).
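Why sorted input enables a single pass can be sketched with a toy (this is an illustration, not the trie-db API): once the next sorted key no longer shares a node's prefix, that node can never be touched again and can be finalized, so only a stack of pending nodes is needed.

```rust
// Toy illustration (not the trie-db API) of why sorted changes enable a
// single-pass batch update with only a stack of pending nodes.

// Length of the common prefix of two keys.
fn lcp(a: &[u8], b: &[u8]) -> usize {
    a.iter().zip(b).take_while(|(x, y)| x == y).count()
}

fn main() {
    let mut changes = vec![
        (b"abd".to_vec(), 1),
        (b"abc".to_vec(), 2),
        (b"b".to_vec(), 3),
    ];
    // The sort that unordered triedbmut insertion does implicitly,
    // moved up front as a preprocessing step.
    changes.sort_by(|a, b| a.0.cmp(&b.0));

    for pair in changes.windows(2) {
        let depth = lcp(&pair[0].0, &pair[1].0);
        // Everything on the current path deeper than `depth` can be
        // finalized now: no later (sorted) key can reach it again.
        println!("after {:?}: finalize nodes below depth {}", pair[0].0, depth);
    }
}
```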

Non-optimal TrieDBMut key-value queries on batch processing.

paritytech/substrate#6780 revealed that a different input order of changes when computing a root with triedbmut can result in a slightly different query plan and thus a different proof of execution.

The root cause is that the triedbmut algorithm fuses a branch without a value and a single child into that single child too eagerly.

The operation is applied in the fix function and accesses the single child.

If we later add a child to the deleted (fused) branch, the deleted branch can be restored and there was no need to query the single child.

So we end up querying an unneeded node with respect to a batch update.
This is not really bad, as the hash of the fused node is not calculated (hashes are calculated lazily), but it ends up being an issue with the way we register proofs in Substrate (we store all nodes queried on the kv backend).

A first way to solve this would be to apply fix lazily (on root calculation) as we do with hash calculation and db writing, but it seems to me that this would require ordering the fix calls and is not as simple as inserting nodes into the memorydb.

A second way would be to rewrite the batch update.
I would propose/illustrate (see #110 draft pr) a batch update that works on sorted changes and thus allows applying the node fusing only when we are sure no other change will be done on the branch (when exiting it). This kind of batch update is also good for memory footprint (only a stack of at most 16 nodes needs to be kept in memory), but that is not really relevant in the Substrate use case.

A third way would be to add the additional fix-related query to the Substrate spec.
This is not a sound idea; specifying that the proof should be strictly the required set of trie nodes to run the operation seems more correct to me and would make conflicts with other implementations easier to rule out.

Create an iterator for a prefix

Substrate currently queries trie nodes by prefix.
This operation uses the triedb iterator: 'seek' to the prefix, then call the iterator's 'next' until a key no longer begins with the prefix.
This is problematic when the operation is run over a proof: the proof/witness will need to include the query for this last value (which is not strictly needed).
A more compact/correct proof is doable by limiting the iteration to only the branches that contain the prefix.

This operation would simply be a variant of 'iter', likely 'iter_prefix'.
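The intended behaviour can be sketched on a flat sorted map (a toy stand-in, not the triedb API): iteration covers exactly the keys under the prefix, and the real implementation would additionally restrict descent to the branches containing the prefix so the first key past the range is never touched.

```rust
use std::collections::BTreeMap;

// Toy stand-in for `iter_prefix`: iterate only the keys under a prefix,
// using sorted order to bound the range.
fn iter_prefix<'a>(
    map: &'a BTreeMap<Vec<u8>, u32>,
    prefix: &'a [u8],
) -> impl Iterator<Item = (&'a Vec<u8>, &'a u32)> {
    map.range(prefix.to_vec()..)
        .take_while(move |(k, _)| k.starts_with(prefix))
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(b"ab".to_vec(), 1);
    map.insert(b"abc".to_vec(), 2);
    map.insert(b"abd".to_vec(), 3);
    map.insert(b"b".to_vec(), 4);

    let under_ab: Vec<u32> = iter_prefix(&map, b"ab").map(|(_, v)| *v).collect();
    assert_eq!(under_ab, vec![1, 2, 3]);
    println!("{under_ab:?}");
}
```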

Make a strict specification and 1.0 release

Since the trie logic is now part of the consensus, to avoid future reimplementations by similar prs (#42), the implementation should follow a strict specification and then never change in an observable way.

Are there any docs or words explaining why pr #142 changed the leaf node from storing a normal value to a hashed node?

Related to #142 and Substrate pr paritytech/substrate#9732.
I don't know why you designed this feature, which changes the leaf node from storing a normal value to a hashed node depending on a threshold.
This change requires a migration for Substrate nodes, and I do not understand why this feature is necessary.

It seems that this feature spends one more hash to reduce the cost of calculating the state root? Or something else?

I could not find any discussion of this. Can you explain the reasoning behind this feature?

Guard against branch with one child and no value

When decoding nodes we should not allow a branch with one child and no value.
This change requires changing the codec trait to allow the compact proof decoding in trie-db/src/proof/verify.rs (internally this proof can skip the value).
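The rule could be enforced at decode time roughly like this (a hypothetical helper for illustration only; the actual change goes through the codec trait):

```rust
// Sketch of the proposed decode-time guard: a branch with exactly one
// child and no value should have been fused into that child, so reject
// it as an invalid encoding.
fn check_branch(child_count: usize, has_value: bool) -> Result<(), &'static str> {
    if child_count == 1 && !has_value {
        return Err("invalid node: branch with one child and no value");
    }
    Ok(())
}

fn main() {
    assert!(check_branch(1, false).is_err());
    assert!(check_branch(1, true).is_ok());
    assert!(check_branch(2, false).is_ok());
    println!("ok");
}
```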

The `reference-trie` version 0.29.1 on crates.io refers to an older version.

The reference-trie version 0.29.1 on crates.io uses an older version of trie-db (0.27.0), which is inconsistent with the version specified in its Cargo.toml. This inconsistency may lead to ambiguity when importing external crates.

https://crates.io/crates/reference-trie/0.29.1/dependencies

trie-db = { path = "../../trie-db", default-features = false, version = "0.28.0" }

trie-db cannot be used as a crate dependency after 0.19.2

How to reproduce

  • create a new project with cargo, i.e. cargo init --bin <project_name>
  • add trie-db to the dependencies in Cargo.toml
  • cargo build fails on versions 0.20 and 0.21 with the following error
  • it can be built with version 0.19.2
error[E0599]: no function or associated item named `new` found for struct `hashbrown::set::HashSet<_, _>` in the current scope
   --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:442:24
    |
442 |             death_row: HashSet::new(),
    |                                 ^^^

error[E0599]: no function or associated item named `new` found for struct `hashbrown::set::HashSet<_, _>` in the current scope
   --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:463:24
    |
463 |             death_row: HashSet::new(),
    |                                 ^^^

error[E0599]: no method named `insert` found for struct `hashbrown::set::HashSet<(<<L as TrieLayout>::Hash as hash_db::Hasher>::Out, (smallvec::SmallVec<[u8; 36]>, std::option::Option<u8>))>` in the current scope
   --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:519:21
    |
519 |                     self.death_row.insert((hash, current_key.left_owned()));
    |                                    ^^^^^^
    |
   ::: /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.3/src/map.rs:17:1
    |
17  | pub enum DefaultHashBuilder {}
    | --------------------------- doesn't satisfy `_: std::hash::BuildHasher`
    |
    = note: the method `insert` exists but the following trait bounds were not satisfied:
            `hashbrown::map::DefaultHashBuilder: std::hash::BuildHasher`

error[E0599]: no method named `insert` found for struct `hashbrown::set::HashSet<(<<L as TrieLayout>::Hash as hash_db::Hasher>::Out, (smallvec::SmallVec<[u8; 36]>, std::option::Option<u8>))>` in the current scope
   --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:523:21
    |
523 |                     self.death_row.insert((hash, current_key.left_owned()));
    |                                    ^^^^^^
    |
   ::: /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.3/src/map.rs:17:1
    |
17  | pub enum DefaultHashBuilder {}
    | --------------------------- doesn't satisfy `_: std::hash::BuildHasher`
    |
    = note: the method `insert` exists but the following trait bounds were not satisfied:
            `hashbrown::map::DefaultHashBuilder: std::hash::BuildHasher`

error[E0599]: no method named `insert` found for struct `hashbrown::set::HashSet<(<<L as TrieLayout>::Hash as hash_db::Hasher>::Out, (smallvec::SmallVec<[u8; 36]>, std::option::Option<u8>))>` in the current scope
    --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:1269:24
     |
1269 |                                 self.death_row.insert((
     |                                                ^^^^^^

     |
    ::: /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.3/src/map.rs:17:1
     |
17   | pub enum DefaultHashBuilder {}
     | --------------------------- doesn't satisfy `_: std::hash::BuildHasher`
     |
     = note: the method `insert` exists but the following trait bounds were not satisfied:
             `hashbrown::map::DefaultHashBuilder: std::hash::BuildHasher`

error[E0599]: no method named `insert` found for struct `hashbrown::set::HashSet<(<<L as TrieLayout>::Hash as hash_db::Hasher>::Out, (smallvec::SmallVec<[u8; 36]>, std::option::Option<u8>))>` in the current scope
    --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:1353:23
     |
1353 |                             self.death_row.insert(
     |                                            ^^^^^^
     |
    ::: /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.3/src/map.rs:17:1
     |
17   | pub enum DefaultHashBuilder {}
     | --------------------------- doesn't satisfy `_: std::hash::BuildHasher`
     |
     = note: the method `insert` exists but the following trait bounds were not satisfied:
             `hashbrown::map::DefaultHashBuilder: std::hash::BuildHasher`

error[E0599]: no method named `insert` found for struct `hashbrown::set::HashSet<(<<L as TrieLayout>::Hash as hash_db::Hasher>::Out, (smallvec::SmallVec<[u8; 36]>, std::option::Option<u8>))>` in the current scope
    --> /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/trie-db-0.21.0/src/triedbmut.rs:1372:23
     |
1372 |                             self.death_row.insert((hash, (child_prefix.0[..].into(), child_prefix.1)));
     |                                            ^^^^^^
     |
    ::: /home/yanganto/.cargo/registry/src/github.com-1ecc6299db9ec823/hashbrown-0.6.3/src/map.rs:17:1
     |
17   | pub enum DefaultHashBuilder {}
     | --------------------------- doesn't satisfy `_: std::hash::BuildHasher`
     |
     = note: the method `insert` exists but the following trait bounds were not satisfied:
             `hashbrown::map::DefaultHashBuilder: std::hash::BuildHasher`

error: aborting due to 7 previous errors

For more information about this error, try `rustc --explain E0599`.
error: could not compile `trie-db`.

Thread Safe

(dyn trie_db::TrieCache<sp_trie::NodeCodec<KeccakHasher>> + 'static) cannot be sent between threads safely
