nethermindeth / paprika

An experimental storage for Nethermind, removing the whole Trie abstraction and acting as a Trie and a database at once

License: GNU Lesser General Public License v3.0


paprika's Introduction

🌶️ Paprika

Paprika provides a custom implementation of the Patricia tree used in Ethereum. It aims to replace the underlying storage with a solution that operates at a higher level of abstraction.

Project

The project is split into milestones. The actual work is managed using Paprika GitHub Project.

Design

Visit the design document for all the design and implementation-specific information about Paprika.

Benchmarks

Benchmarks are described and provided in the benchmarks document.

Contributing

It's great that you want to contribute your time and effort to Paprika! Please take a look at the good first issues; they tend to have a lower barrier to entry. Do not forget to take a look at the design document.

Before you start to work on a feature or fix, please read and follow our contribution guide to help avoid any wasted or duplicate effort.

License

Nethermind is open-source software licensed under the LGPL-3.0.

paprika's People

Contributors

benaadams, emlautarom1, omahs, scooletz

paprika's Issues

Benchmark storage-related opcodes

The cost of an SSTORE depends on the existing value and the value to be stored:

  1. Zero vs. nonzero values - storing nonzero values is more costly than storing zero
  2. The current value of the slot vs. the value to store - changing the value of a slot is more costly than not changing it
  3. "Dirty" vs. "clean" slot - changing a slot that has not yet been changed within the current execution context is more costly than changing a slot that has already been changed

Consider this, as each SSTORE will always require loading the existing value first to estimate the gas price. Should there be a bloom filter or something similar?
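For illustration, a hedged sketch of such a pre-check, assuming a per-execution-context bloom filter over 32-byte slot keys; the type and all names here are hypothetical, not Paprika's API:

using System;

public sealed class SlotBloom
{
    private readonly ulong[] _bits = new ulong[128]; // 8192 bits

    private static (int Word, int Bit) Position(ReadOnlySpan<byte> key, int seed)
    {
        // cheap, non-cryptographic mixing of a few key bytes
        var hash = (uint)HashCode.Combine(seed, key[0], key[7], key[15], key[23], key[31]);
        var bit = (int)(hash % 8192);
        return (bit >> 6, bit & 63);
    }

    public void Add(ReadOnlySpan<byte> slotKey)
    {
        for (var seed = 0; seed < 3; seed++)
        {
            var (word, bit) = Position(slotKey, seed);
            _bits[word] |= 1UL << bit;
        }
    }

    // false => definitely clean; true => possibly dirty, load the value to be sure
    public bool MayContain(ReadOnlySpan<byte> slotKey)
    {
        for (var seed = 0; seed < 3; seed++)
        {
            var (word, bit) = Position(slotKey, seed);
            if ((_bits[word] & (1UL << bit)) == 0) return false;
        }
        return true;
    }
}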

Make the root page fanout 256

Introduce a fanout of 256 at the root level. This should help make the tree shorter. It needs to be checked against abandoned pages to see how many slots are occupied there. Add a metric for this.

Storage pages, addressing, splits and updates

Introduce a basic storage page, not a metadata one. Consider adding the following properties to the header:

  1. tx_id - id of the transaction that wrote or is writing the page
  2. page type - the type of the page (for state only now)

Page types that are potentially good:

  1. data16 - has nibble addressing in the page, so less fanout but more data on the page
  2. data256 - has byte addressing that allows greater fanout but less data on the page

Each page uses a frame as a bucket for account data. As there are no more than 64 frames in a page, a single long with BitOperations can be used to navigate it.
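A minimal sketch of that navigation, assuming a ulong bitmap of occupied frames; the type and method names are hypothetical:

using System.Numerics;

struct FrameBitmap
{
    private ulong _taken; // bit i set => frame i is occupied

    // Returns the index of the first free frame, or -1 if all 64 are taken.
    public int TryReserve()
    {
        var free = ~_taken;
        if (free == 0) return -1;

        var index = BitOperations.TrailingZeroCount(free);
        _taken |= 1UL << index;
        return index;
    }

    public void Release(int index) => _taken &= ~(1UL << index);
}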

The following should be taken into consideration:

  1. The maximum RLP-serialized account size for the foreseeable future is 85 bytes
    1. this works both for EOAs and Contracts
    2. the following test was used to prove it https://github.com/NethermindEth/nethermind/blob/paprika-playground/src/Nethermind/Nethermind.Core.Test/Encoding/PaprikaTests.cs
    3. this value is unlikely to change within a few years
    4. if alignment to long is desirable but some space should be left for additional data, a good aligned value is 96 bytes (16 * 6). It leaves 11 bytes for additional data:
      1. length of the RLP (1 byte)
      2. 10 additional bytes
    5. additionally, leaves have a part of a key that is 32 bytes, which gives in total 96 bytes + 32 bytes = 128 bytes
    6. 128 bytes is an upper boundary for a bucket of key→value
    7. these 128 bytes will be called a frame (slot is a reserved keyword for storage)
    8. a single 4kb page with no management overhead could host up to 32 frames
    9. tests with slimFormat (see: [AccountDecoder](https://github.com/NethermindEth/nethermind/blob/master/src/Nethermind/Nethermind.Serialization.Rlp/AccountDecoder.cs#L20)) show that this can greatly reduce the amount of memory taken by RLP, making a single account take 17 bytes. This could mean that frames could be cut in half, to 48 bytes
      1. this would mean that a shorter frame, with 11 bytes left for the overhead as in the bigger one, can provide 37 bytes for the actual RLP, which should be more than enough for regular accounts
      2. a single 4kb page with no management overhead could then host up to 64 short frames, which can be tracked with the bits of a single long

Enhance Paprika with batch metrics

Enhance Paprika with batch metrics like:

  1. number of allocated pages total
  2. number of new allocated pages
  3. histogram of tree depth accesses (reads/writes)

Introduce dirty tracking

The State tree needs its root hash recalculated at the commit of each block. To make this efficient, only paths that were changed during the given block should be recalculated. This requires the ability to record which paths were accessed and to act on this when recalculating the root.

There are two options to track the nodes that were amended during the given block: track it in the node or the page, or track it elsewhere. The first option requires storing or cleaning additional metadata related to each key. The second, where a list of keys that were changed is kept in memory, is much less demanding. The component that holds the dirty information can be called a DirtyMap.

If the DirtyMap was stored alongside the block and pointed to from the RootPage, it would in the future enable building an archive engine that would recognize which nodes were changed (on the DirtyMap basis) and reindex them before Paprika overwrites them when the block moves beyond the Max Reorganization Boundary.
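A minimal sketch of the DirtyMap idea, assuming paths can be keyed by their hex form; all names are hypothetical:

using System;
using System.Collections.Generic;

public sealed class DirtyMap
{
    // paths touched in the current block, keyed by hex for simplicity
    private readonly HashSet<string> _paths = new();

    public void MarkDirty(ReadOnlySpan<byte> path) => _paths.Add(Convert.ToHexString(path));

    public bool IsDirty(ReadOnlySpan<byte> path) => _paths.Contains(Convert.ToHexString(path));

    // enumerated when recalculating the root at commit
    public IReadOnlyCollection<string> All => _paths;
}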

Make the file access Random

Ensure that the files that are used are opened with FileOptions.RandomAccess and other flags that might be needed (see #35). As the data are accessed in a non-sequential way, reading them as if they were sequential may pollute the page cache and decrease the overall performance of Paprika.
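A minimal sketch of such an open, using the standard FileStream constructor; the file name is an assumption:

using System.IO;

using var stream = new FileStream(
    "paprika.db",
    FileMode.Open,
    FileAccess.ReadWrite,
    FileShare.Read,
    bufferSize: 1,               // no user-space buffering
    FileOptions.RandomAccess);   // hint the OS not to read ahead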

SlottedArray: empty storage path optimization

There are cases where there's no use in storing NibblePath.Empty, which, when encoded, takes 1 byte of value 0. This can be approached in two ways: either NibblePath.Empty should be changed so that it always encodes to Span<byte>.Empty, or SlottedArray could use Key.Type and store the storage path only for Storage and Merkle. This could reduce the disk size and improve the packing of data.

Clean up the solution from Trie projects

Currently the solution consists of the Paprika and Trie projects. The Trie projects are obsolete, and the solution should be left with only Paprika, Paprika.Tests and Paprika.Runner. Crypto should be aligned with Paprika; it can be left as a separate project but does not have to be.

Merkle handling in PagedDb

Amend the PagedDb component so that it supports the Merkle construct.

  • revisit the existing maps (NibbleBasedMap and HashingMap) to see whether they fit the nature of Merkle, which can have pretty small keys and values compared with others #131
  • leave some allowance on each internal Page to keep the MPT. Calculate how much memory will be needed and get it from the in-page cache component. This will limit the memory a bit, but will fix a blow-up on truncating the path. #134
  • allow applying deletes (set operation with value of Span<byte>.Empty) and propagate it in the tree #138
  • Massive Storage Tree should search for Merkle keys as well, so that the Merkle information about the storage is also extracted #138
  • apply #118 by allowing the root of the MPT to be held in the first DataPage

Additional notes, covered by the points above: Merkle keys are much shorter than other keys used in the PagedDb. As PagedDb is path-based, it uses full addresses for accounts and account+storage for storage. As it truncates one nibble per level of the page tree, it requires leaving some space for Merkle values, as their keys will be much shorter; for example, for the root they will be of length 0.

Archive storage

Once #26 is provided, each batch will contain information about the changes it contains. This information can be retrieved from a block using a readonly batch and can be queried for the values to which given keys were changed. As the next step, the change set can be stored in a separate storage (an append-only, dense medium). This would result in a mixed-medium storage that:

  • for blocks younger than Max Depth Reorg - Paprika is used for providing the state
  • for blocks older than Max Depth Reorg - the archive is used

Remarks:

  1. the archiving is done in parallel with regular writes as it uses the existing reader transactions introduced in #5. No need to amend anything
  2. the storage should be designed separately so that it's dense
  3. potentially, a separate Paprika instance could be used so that it never deletes or overwrites but just stores values by their composite, combined keys:
    1. [account-hash][block-number] for storing account information
    2. [account-hash][storage-slot][block-number] for storing storage information
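A hedged sketch of building such composite keys; the exact layout and names are assumptions, with the block number written big-endian so that the history of one account sorts chronologically:

using System;
using System.Buffers.Binary;

static class ArchiveKeys
{
    // [account-hash][block-number]
    public static byte[] ForAccount(ReadOnlySpan<byte> accountHash, ulong blockNumber)
    {
        var key = new byte[accountHash.Length + sizeof(ulong)];
        accountHash.CopyTo(key);
        // big-endian so that keys for one account sort by block number
        BinaryPrimitives.WriteUInt64BigEndian(key.AsSpan(accountHash.Length), blockNumber);
        return key;
    }

    // [account-hash][storage-slot][block-number]
    public static byte[] ForStorage(ReadOnlySpan<byte> accountHash, ReadOnlySpan<byte> storageSlot, ulong blockNumber)
    {
        var key = new byte[accountHash.Length + storageSlot.Length + sizeof(ulong)];
        accountHash.CopyTo(key);
        storageSlot.CopyTo(key.AsSpan(accountHash.Length));
        BinaryPrimitives.WriteUInt64BigEndian(key.AsSpan(accountHash.Length + storageSlot.Length), blockNumber);
        return key;
    }
}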

Data corruption with bigger runs of Paprika.Runner

After merging #134 and running Paprika.Runner several times, occasionally the data are corrupted. It requires bisecting or a closer review to find out what's going on, and whether it's a dirty file (wrong cleanup). It looks like the memory pager is fine, but this requires digging.

Verify for key components of Paprika v1

Introduce snapshot verification for Key components of Paprika. Use Verify with its support for binary comparison to make sure that all key components of Paprika v1 are captured and covered.

  • install Verify
  • provide a v1 directory for snapshots and tests aligned with the current version defined here
  • cover:
    • NibblePath, read/write, GetHashCode
    • SlottedArray, at least one case for each key type
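A hedged sketch of such a snapshot test, assuming xUnit with the VerifyXunit package (attribute requirements vary between Verify versions) and an illustrative, not exact, way of producing the serialized NibblePath bytes:

using System;
using System.Threading.Tasks;
using VerifyXunit;
using Xunit;

[UsesVerify]
public class NibblePathSnapshotTests
{
    [Fact]
    public async Task NibblePath_write_snapshot()
    {
        // stand-in for the bytes produced by serializing a NibblePath
        var written = new byte[] { 0x12, 0x34 };

        // Verify stores the snapshot on disk (e.g. under a v1 directory)
        // and fails the test on any difference
        await Verifier.Verify(Convert.ToHexString(written));
    }
}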

Reenable Paprika.Runner

Currently, Paprika.Runner does not run. Realign it to the new API and provide a runner with a meaningful number of operations so that it can be used to profile performance.

Requires: #9

Parallelize Merkle construct

Introduce parallel computation in the Merkle construct so that it can be computed much faster.

  • implement a custom ICommit that can be created by wrapping the original one
    • the wrapper of the ICommit is required to set data locally, but default to reading through to the original commit if data is not found
    • this should be based on the Page pool so that no allocations happen here
    • use existing maps to implement it
  • wrap each storage computation (see the snippet below) with this wrapper, gather results, set accounts
  • consider introducing state tree parallelism. It will be harder, as it will require sprinkling parallelism into the tree creation

foreach (var accountAddress in storage.AccountsWithModifiedStorage)
{
    prefixed.SetPrefix(accountAddress);

    // compute new storage root hash
    var keccakOrRlp = Compute(Key.Merkle(NibblePath.Empty), prefixed, TrieType.Storage);

    // read the existing account
    var key = Key.Account(accountAddress);
    using var accountOwner = commit.Get(key);
    Account.ReadFrom(accountOwner.Span, out var account);

    // update it
    account = account.WithChangedStorageRoot(new Keccak(keccakOrRlp.Span));

    // set it
    commit.Set(key, account.WriteTo(accountSpan));
}

Pooling for write-batches

Write batches should pool everything that they use, so that the amount of memory allocated per batch is greatly limited.

Optimize Account format

Currently, when serializing an Account, the Nonce and Balance store their lengths using a full byte each. Taking into consideration the total Eth supply and how often a nonce is bumped, both values can be capped at UInt128 or less. Provide a way of storing accounts in an optimized way, where a single byte is used to encode both lengths:

  • 0b1000_0000 - mask for marking the special encoding
  • 0b0111_0000 - mask for storing the number of bytes for nonce (up to 7 bytes)
  • 0b0000_1111 - mask for storing the number of bytes for balance (up to 15 bytes)

This should reduce the account size by 1 byte, which is ~2.5% of the total size and can greatly reduce the tree size.
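A minimal sketch of the proposed single-byte encoding under the masks above; the helper names are hypothetical:

using System.Diagnostics;

static class AccountLengths
{
    const byte SpecialEncoding = 0b1000_0000; // marks the optimized format
    const byte NonceMask       = 0b0111_0000; // nonce byte count, up to 7
    const byte BalanceMask     = 0b0000_1111; // balance byte count, up to 15

    public static byte Encode(int nonceBytes, int balanceBytes)
    {
        Debug.Assert(nonceBytes <= 7 && balanceBytes <= 15);
        return (byte)(SpecialEncoding | (nonceBytes << 4) | balanceBytes);
    }

    public static (int NonceBytes, int BalanceBytes) Decode(byte b) =>
        ((b & NonceMask) >> 4, b & BalanceMask);
}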

Introduce BenchmarkDotNet

  • add a separate project with BenchmarkDotNet
  • provide two initial benchmarks based on DataPage:
    1. inserting keys into one page, into one bucket, to measure the handling of a high collision rate and later lookup speed (31 keys with the same first nibble)
    2. inserting uniformly distributed keys in a number that generates 2 layers of DataPages (31 * 31 should be enough); Set & Get them
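A hedged sketch of the first benchmark; DataPage is stubbed here so the sketch compiles on its own, while the real benchmark would target Paprika's actual DataPage:

using System;
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class DataPageBenchmarks
{
    private readonly byte[][] _sameNibbleKeys = new byte[31][];

    [GlobalSetup]
    public void Setup()
    {
        for (byte i = 0; i < 31; i++)
        {
            var key = new byte[32];
            key[0] = 0x10; // the same first nibble (0x1) for every key
            key[31] = i;   // unique tail
            _sameNibbleKeys[i] = key;
        }
    }

    [Benchmark]
    public void InsertSameFirstNibble()
    {
        var page = new DataPage();
        foreach (var key in _sameNibbleKeys)
            page.Set(key, key);
    }
}

// Stub standing in for Paprika's DataPage.
public sealed class DataPage
{
    private readonly Dictionary<string, byte[]> _map = new();
    public void Set(byte[] key, byte[] value) => _map[Convert.ToHexString(key)] = value;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<DataPageBenchmarks>();
}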

Please do not provide a GitHub Actions workflow for them, as GH Actions are not the best choice for running benchmarks; see the quote below:

Based on this brief study, I do not recommend using the default GitHub Actions build agent pool for any kind of performance comparisons across multiple builds: such results can not be trusted in the general case. If you want to get a reliable set of performance tests, it’s better to have a dedicated pool of physical build agents with a unified hardware/software configuration using carefully prepared OS images.

from Andrey Akinshin's blog post

Introduce test coverage reporting

Introduce test coverage reporting. The reporting should provide the following:

  1. It should be runnable locally
  2. It should be run on each main build
  3. It should be manually runnable for each PR
  4. If run in a PR, it should report results in a way that is easy to read (a PR comment?)
  5. If run on main, it should allow linking to the results
  6. It would be great if there was a way to compare the results with some previous runs

See: dotnet-coverage

Storage space multiplication

An important aspect of storage is that many nodes in storage are shared between multiple accounts. Massive savings can come from storing them only once, based on their hash. As Paprika currently stores everything using a by-path approach, it should be measured and considered:

  1. its overhead and, if one is noticed, how big it is
  2. how to address it

Memory ownership destroyed for unknown reason

The TransformExtension method for some reason destroys the memory ownership and requires copying to a locally stackalloc'ed Span. This should not be the biggest burden, but still, solving the mystery would be appreciated. The method that misbehaves is this one:

private static DeleteStatus TransformExtension(in Key childKey, ICommit commit, in Key key, in Node.Extension ext)
{
    using var childOwner = commit.Get(childKey);

    // TODO: this should not be needed, but for some reason the ownership of the owner breaks memory safety here
    Span<byte> copy = stackalloc byte[childOwner.Span.Length];
    childOwner.Span.CopyTo(copy);

    Node.ReadFrom(copy, out var childType, out var childLeaf, out var childExt, out _);

    if (childType == Node.Type.Extension)
    {
        // it's E->E, merge extensions into a single extension with concatenated path
        commit.DeleteKey(childKey);
        commit.SetExtension(key,
            ext.Path.Append(childExt.Path, stackalloc byte[NibblePath.FullKeccakByteLength]));

        return DeleteStatus.NodeTypePreserved;
    }

    // it's E->L, merge them into a leaf
    commit.DeleteKey(childKey);
    commit.SetLeaf(key,
        ext.Path.Append(childLeaf.Path, stackalloc byte[NibblePath.FullKeccakByteLength]));

    return DeleteStatus.ExtensionToLeaf;
}

Depending on the run it can:

  1. throw a NullReferenceException
  2. break the CLR by pointing to some valid memory region but failing on the dispose dispatch

Consider bigger DataPages fanout

Potentially, by having a bigger DataPages fanout, Paprika could write a smaller number of pages. The upper layers of the tree fill up quite easily. They also truncate only one nibble, which makes no difference when writing a NibblePath to its serialized form. Let's try a fanout of 256.

Should be done after #118

Keccak memoization

Enhance ComputeMerkleBehavior with a behavior that allows defining which of the Keccaks are memoized. Please keep in mind that whether Keccaks are stored does not impact the Merkle behavior's correctness, as it tracks paths regardless. The only impact is on performance.

One way to implement it would be to have a single integer that marks the level from which the Keccaks of branches are memoized, like int memoizeKeccakFromTrieLevel. This would work nicely for both Storage and State, so it looks like no differentiation between them would be required. Maybe we could start with int memoizeKeccakFromTrieLevel = 2? Also, caching every Nth level could be an option to make the data smaller.

Provide initial /docs directory

Provide an initial /docs directory that describes the design choices and the reasoning behind the decisions in Paprika's design. It should be quite low-level and encourage contributors to describe what they do. Also, it should be the go-to place when questions are raised.

For now, no fancy structure is required. Even a single Readme.md will do.

Reorganization handling

Related to #7. How should a reorganization be handled? The transaction id should go up as usual, but the block number/hash should go back to the one that is not reorganized away. Additionally, a marker should potentially be stored so that the previous roots are not reorganized away.

Range queries for synchronization

Provide primitives needed for synchronization using Paprika. The most important one is a range query that can be executed over a readonly transaction. This query should provide:

  1. ranges of accounts by their hash,
  2. ranges of storage items per account.

Probably the easiest implementation would be based on a cursor approach, which would allow providing an IEnumerable or ref-based enumeration. The cursor implementation would be similar to the LMDB approach, where, to serve a cursor, a vector of pages spanning the whole depth of the tree is remembered. The reason for that is that neither LMDB nor Paprika implements a B+ tree fully, meaning that the pointer to the next page on the leaf level is missing. This has already been implemented in LMDB and we can follow the same path. So if you open a readonly batch for the given block, you can read as long as you need to, getting cursor-like behavior.

One could think of providing a next pointer in the page, but this goes against the COW design and breaks it terribly. It's discussed in depth by Howard Chu in this LMDB talk.

Consider EIP-158 and its impact on the storage

EIP-158 introduces state-deletion behavior for empty accounts. It should be considered whether and how to introduce it in Paprika, and whose responsibility it will be to cover and implement it.

Additional remarks provided:

If you want to be able to archive sync with Paprika, you'll need to support both the case where empty accounts are deleted and the case where they aren't.

Also, for AuRa system transactions, EIP-158 is ignored for the system account regardless of whether EIP-158 is enabled. This means that the system account (which is very likely to be empty) is still left in the state and not deleted.

Page allocator and page management

Build the page allocator and free-page management into the metadata, making sure that the reorg boundary is not breached; meaning that when the reorg depth handling is set to N, it should be preserved and pages within the last N transactions should not be touched.

Make MemoryMappedDb properly create the file and initialize it

Make MemoryMappedDb properly create the file and zero the first pages so that the initial roots are empty. Otherwise, it's up to the OS behavior. The algorithm should be simple: whenever a file is created for the memory-mapped db, clear its first pages.

Change flushing mechanism on Windows

According to LMDB and other evidence, calling FlushViewOfFile on Windows has terrible performance implications, as it trashes the page cache. The way to flush on Windows is to use RandomAccess.Write over a SafeFileHandle that was opened in a synchronous, non-writethrough manner. If such a file is memory-mapped, writes to it will update the memory-mapped region and hit the disk, but won't cause a massive page eviction.

It requires benchmarking, especially as the number of syscalls can be much bigger. Also, try to combine multiple writes of adjacent pages. Scatter/gather IO is not an option here, as it bypasses the memory map and would result in consistency issues.
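A minimal sketch of that flush path, assuming .NET 6+ File.OpenHandle and RandomAccess APIs; the file name and page size are assumptions:

using System;
using System.IO;
using Microsoft.Win32.SafeHandles;

const int PageSize = 4096;

// synchronous, non-write-through handle; the same file may be memory-mapped for reads
using SafeFileHandle handle = File.OpenHandle(
    "paprika.db", FileMode.Open, FileAccess.Write, FileShare.ReadWrite, FileOptions.None);

byte[] page = new byte[PageSize];

// flush a single page by writing it at its offset instead of calling FlushViewOfFile
RandomAccess.Write(handle, page, 42L * PageSize);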

Keccak should compute over non-contiguous memory regions

Paprika has a copy of Keccak from Nethermind. Currently, the Keccak implementation does not implement HashAlgorithm. That class provides two methods:

  1. HashCore(ReadOnlySpan<byte>) and
  2. HashFinal()

that can be used to construct a hash over non-contiguous memory. It'd be great if Keccak, even without implementing the actual HashAlgorithm, allowed for the incremental consumption of sequences of memory.

Once provided, RLP encoding could benefit from it greatly.

Keep in mind that it does not have to use ReadOnlySequence<byte> per se; it might provide some other streaming API that could, for example, accept up to N spans via a span-like sequence or multiple overloads.
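A hypothetical API shape (not the current Nethermind/Paprika code) for the streaming Keccak described above:

using System;

public interface IIncrementalKeccak
{
    // Feed the next chunk; mirrors HashAlgorithm.HashCore(ReadOnlySpan<byte>).
    void Update(ReadOnlySpan<byte> data);

    // Produce the final 32-byte hash; mirrors HashAlgorithm.HashFinal().
    void Finish(Span<byte> hash32);
}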

Add DataPage type with fanout of 256

As described in #6, Paprika would benefit from using pages with a bigger fanout for the top levels of the tree. Assuming 250 million accounts and the framing per regular DataPage at the current 31 frames per page (up to 62 with bigger density), level zero and level one should be subject to the bigger fanout. This would give a 64k reduction on the first two levels, leaving 3k for the next levels.
This should result in a maximum of 4 levels of search for an account.

Merkle storage handling

Make the Merkle construct work for both Storage and State. This will require some amendments to the current way it works:

  1. Make ICommit aware of the context it runs in. Merkle should never know whether an account should be added to the key or not; just make it work for a general Merkle tree with some keys added or deleted.
  2. Introduce a split in Block so that Merkleization for Storage can be run first, for each account separately, and then for the State.
  3. Reconsider using RLP, or make a change so that the Merkleization can be agnostic of what it merkleizes.
  4. Provide tests for the root.

Build in storage support for accounts

The storage approach needs to be designed and implemented. It might require a different kind of page, as the framing for accounts will be much different. The sketched approach is to have a tuple of three (UInt256 key, UInt256 value, PageAddress next) and provide a fanout scenario similar to the one used for the accounts. A modified approach, though, would be to allow the frames to reside on the same page as the account data. This would require even more fine-grained pooling (#23). A layout sketch follows.
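A layout sketch of that tuple, with stubs for UInt256 and PageAddress so that it stands alone; the real types come from Nethermind/Paprika:

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct UInt256 { public ulong U0, U1, U2, U3; } // stub for Nethermind's UInt256

public struct PageAddress { public uint Raw; }          // stub for a page pointer

[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct StorageFrame
{
    public UInt256 Key;      // storage slot key
    public UInt256 Value;    // storage slot value
    public PageAddress Next; // next frame in the chain, enabling the fanout
}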

Please do update docs according to the design.

Proof support

Paprika should support proofs, both accountProof and storageProof. Each proof is an array of RLP-serialized Merkle tree nodes with different roots (either the state root or a storage root). See https://docs.alchemy.com/reference/eth-getproof for more information.

This can be addressed by a proper implementation of #114, as it will require the very same computation for nodes that do not have their Keccaks memoized. This can be done in parallel, as all the nodes should be able to have their Keccaks computed on the basis of their content only.

Provide methods to handle SELFDESTRUCT

The SELFDESTRUCT opcode requires the ability to destroy any account with its underlying storage tree in one call, like:

IWorldState.Destroy(anAccountToBeDestroyed)

This should be implemented in a way that is aligned with the write-through behavior of Paprika's tree in PagedDb, so that the information is flushed down not necessarily immediately, which could otherwise result in a lot of writes. If a page contains only storage information for the given account, it could easily be batch.RegisterForFutureReuse()'d, so maybe it's possible to do a deep walk through the tree from the given level, find all the matches, and write just the top page.

Refactor RootPage to just point to DataPage

Currently, the RootPage component provides an initial fanout for DataPages. This means that if there's a value with an empty path (see: the root of the Merkle tree), it will have nowhere to go and will probably break the database. By making the RootPage just point to the first DataPage, the whole situation can be handled in one place: the DataPage. It also makes the DataPage responsible for handling fanout etc.

Introduce StoragePage

Introduce a StoragePage that would be put in place when the data that needs to be pushed down contains only a single account. In that case, all the paths should be truncated to 0 and only the additionalKeys of FixedMap.Key should be used. Otherwise, the tree can be suboptimal, extracting just one nibble from the address of the account.

Better page packing

Provide the capability to pack data into a page in a better way. Use the page approach where the slices of data responsible for storing kv pairs grow from the two ends toward the center:

  1. id entries grow ➡, from the start to the center, and capture basic lookup information
  2. payload grows ⬅, from the end to the center, and captures the payload

Additional materials:

  1. PostgreSQL: bufpage.c
  2. PostgreSQL docs
  3. Andy Pavlo lecture: 03 - Database Storage 1 (CMU Intro to Database Systems / Fall 2022)

This implementation might not necessarily deal with all the aspects, but it should be similar to slotted pages. The indexing aspect might be left as is for now.
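A minimal slotted-page sketch along those lines, assuming a 4kB page with 2-byte offsets and lengths; all names are hypothetical:

using System;

public sealed class SlottedPage
{
    private const int PageSize = 4096;
    private const int SlotSize = 4; // 2 bytes offset + 2 bytes length

    private readonly byte[] _page = new byte[PageSize];
    private int _slotCount;               // slot entries at the front, growing forward
    private int _payloadStart = PageSize; // payloads written downward from the end

    public bool TryAdd(ReadOnlySpan<byte> payload)
    {
        var slotEnd = _slotCount * SlotSize + SlotSize;
        if (_payloadStart - payload.Length < slotEnd)
            return false; // the two regions would meet: page is full

        _payloadStart -= payload.Length;
        payload.CopyTo(_page.AsSpan(_payloadStart));

        var slot = _page.AsSpan(_slotCount * SlotSize, SlotSize);
        BitConverter.TryWriteBytes(slot[..2], (ushort)_payloadStart);
        BitConverter.TryWriteBytes(slot[2..], (ushort)payload.Length);
        _slotCount++;
        return true;
    }

    public ReadOnlySpan<byte> Get(int slotIndex)
    {
        var slot = _page.AsSpan(slotIndex * SlotSize, SlotSize);
        var offset = BitConverter.ToUInt16(slot[..2]);
        var length = BitConverter.ToUInt16(slot[2..]);
        return _page.AsSpan(offset, length);
    }
}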

Metadata pages and the log

Metadata pages are responsible for holding:

  1. transaction number (always increasing)
  2. block hash
  3. block number

In a reorg scenario, the block hash and block number can go back, but the transaction number will always increase. In case of a reorg, additional information about the block should be kept so that its data are not overwritten.

ITreeVisitor

Provide a way to accept Nethermind's ITreeVisitor. It should take into consideration that some nodes don't have their Keccaks stored (see: #77), and potentially a precommit hook of Merkle should be run on them to get the value. As tree visiting is Merkle-specific, it should stick to the Merkle feature.

Additionally, consider the fact that the RootVisitor is used for checking the synchronization status.

Free page management

When running tests, we found that there's an unhealthy number of pages that are not reused. It could be that they are not updated and therefore stay there forever. At least some of them are probably wrongly not reused, taking into consideration the algorithm used for storing the abandoned pages. Currently, the algorithm promotes pages that were not used in the given batch so that they are not reused in the next one, but are considered for reuse after N blocks.

To test it one could:

  1. set history depth to 2
  2. write one block with 1 million entries (an allocation of new pages)
  3. update all the entries from the previous block in another one (all are COWed, so should be considered for a reuse)
  4. start writing small blocks so that the previously COWed pages are reused properly without updating them to the last batch

The comment in the code about the page reuse:

// The current approach is to squash the sets of _abandoned and _unused into one set.
// The disadvantage is that the _unused will get their AbandonedAt bumped to the recent one.
// This means that they will not be reused sooner.
// The advantage is that usually the number of abandoned pages created is lower and
// the bookkeeping is simpler.
// The pages are put in a linked list, as first -> ... -> last -> NULL.

Results of the run with percentiles (image).

Support branching NewPayloads in Engine API

In the Engine API we can get branching payloads, see:
https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#engine_newpayloadv1

So we can get NewPayloads from different branches. We need to be able to respond Valid/Invalid for the canonical branch. We can respond Accepted for a non-canonical branch. The canonical branch is an arbitrary one that is on top of the head designated by the FCU.

Currently, Nethermind responds Valid/Invalid also for non-canonical branches, as it can keep multiple roots on the same block level; this is simpler and allows the CL to work optimally.

Consider whether these requirements can be met with Paprika. If not, we will need to switch to Accepted for non-canonical branches.

If we use Accepted, we will need to process blocks in FCU to validate them properly.

Merkleization

Implement Merkleization for the tree

  • precommit hook for injecting Merkleization #96
  • serialization of Branch, Leaf and Extension to a Span #98
  • calculation of Keccaks for the nodes #104
  • capturing all keys inserted in a block and passing them as the precommit to the Merkleization #111
  • constructing the Merkle root hash by walking the keys from the above #113
  • tracking the dirty status for Merkle tree nodes #113
  • deletes #113
  • Keccak memoization
  • storage handling (separate storage in block and bottom up calculation)
  • squashing into PagedDb

The last 3 will be addressed in separate issues.
