
ethereum / consensus-specs


Ethereum Proof-of-Stake Consensus Specifications

License: Creative Commons Zero v1.0 Universal

Makefile 0.59% Python 97.57% Solidity 1.56% Nix 0.03% Dockerfile 0.04% Shell 0.20%

consensus-specs's Introduction

Ethereum Proof-of-Stake Consensus Specifications

Join the chat at https://discord.gg/qGpsxSA

To learn more about proof-of-stake and sharding, see the PoS documentation, sharding documentation and the research compendium.

This repository hosts the current Ethereum proof-of-stake specifications. Discussions about design rationale and proposed changes can be brought up and discussed as issues. Solidified, agreed-upon changes to the spec can be made through pull requests.

Specs


Core specifications for Ethereum proof-of-stake clients can be found in specs. These are divided into features. Features are researched and developed in parallel, and then consolidated into sequential upgrades when ready.

Stable Specifications

Seq.  Code Name                Fork Epoch
0     Phase0                   0
1     Altair                   74240
2     Bellatrix ("The Merge")  144896
3     Capella                  194048
4     Deneb                    269568

In-development Specifications

Code Name or Topic                      Notes
Electra
Sharding (outdated)
Custody Game (outdated)                 Dependent on sharding
Data Availability Sampling (outdated)

Accompanying documents can be found in specs and include:

Additional specifications for client implementers

Additional specifications and standards outside of requisite client functionality can be found in the following repos:

Design goals

The following are the broad design goals for the Ethereum proof-of-stake consensus specifications:

  • to minimize complexity, even at the cost of some losses in efficiency
  • to remain live through major network partitions and when very large portions of nodes go offline
  • to select all components such that they are either quantum secure or can be easily swapped out for quantum secure counterparts when available
  • to utilize crypto and design techniques that allow for a large participation of validators in total and per unit time
  • to allow for a typical consumer laptop with O(C) resources to process/validate O(C) shards (including any system level validation such as the beacon chain)

Useful external resources

For spec contributors

Documentation on the different components used during spec writing can be found here:

Online viewer of the latest release (latest master branch)

Ethereum Consensus Specs

Consensus spec tests

Conformance tests built from the executable python spec are available in the Ethereum Proof-of-Stake Consensus Spec Tests repo. Compressed tarballs are available in releases.

consensus-specs's People

Contributors

adiasg, agemanning, arnetheduck, asn-d6, axic, benjaminion, carlbeek, dankrad, dapplion, djrtwo, ericsson49, etan-status, fradamt, hwwhww, inphi, json, jtraglia, justindrake, kevaundray, lsankar4033, mkalinin, mrchico, nashatyrev, paulhauner, potuz, protolambda, ralexstokes, terencechain, vbuterin, zilm13


consensus-specs's Issues

Extend SSZ hash type

Proposal

Replace the SSZ hash type family with a type that has arbitrary length, something like hash[32..128].
EDIT:
Consider any hashN type, where N is a positive integer, a valid hash type if N satisfies the condition:
N % 8 == 0

Rationale

Using a fixed set of hash types has no forward compatibility. If a new type such as hash128 is ever needed, implementers will have to update the code that runs hash type validation or use the bytes type as a workaround.
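
A minimal sketch of the proposed rule (the is_valid_hash_type helper is hypothetical; the condition is taken verbatim from the proposal):

import re

def is_valid_hash_type(type_name: str) -> bool:
    # Accept any "hashN" where N is a positive integer satisfying N % 8 == 0.
    match = re.fullmatch(r'hash(\d+)', type_name)
    if match is None:
        return False
    n = int(match.group(1))
    return n > 0 and n % 8 == 0

assert is_valid_hash_type('hash32') and is_valid_hash_type('hash128')
assert not is_valid_hash_type('hash33')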

ancestor_hashes - description in comment

The BeaconBlock.ancestor_hashes field has the following comment:

i'th item is the most recent ancestor that is a multiple of 2**i for i = 0, ..., 31

Consider a block, A, at slot 2**31 + 1 for which we are trying to compute ancestor_hashes. Its immediate ancestor, block B, is at slot 2**31, so it should occupy ancestor_hashes[31] because 2**31 % 2**31 = 0. Block B should also occupy ancestor_hashes[30] because 2**31 % 2**30 = 0. This continues all the way down until 2**31 % 2**0 = 0. Ultimately, ancestor_hashes will be completely filled with the hash of B.

I suspect this should not be the case?

I went back in git history and saw it has been changed by the venerable @rawfalafel in #81. Maybe he could shed some light?
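
A quick sketch of that literal reading (get_ancestor_hash is a hypothetical helper; the sketch assumes an ancestor exists at every slot):

def ancestor_hashes_for(parent_slot, get_ancestor_hash):
    # Literal reading of the comment: the i'th item is the most recent ancestor
    # whose slot is a multiple of 2**i, for i = 0, ..., 31.
    return [get_ancestor_hash(parent_slot - (parent_slot % 2**i)) for i in range(32)]

# Block A at slot 2**31 + 1 has parent B at slot 2**31, and 2**31 % 2**i == 0
# for every i in 0..31, so every entry resolves to the hash of B.
hashes = ancestor_hashes_for(2**31, lambda slot: 'hash_of_B' if slot == 2**31 else 'other_hash')
assert all(h == 'hash_of_B' for h in hashes)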

Wonky calculation in WITHDRAWAL_PERIOD

WITHDRAWAL_PERIOD - number of slots between a validator exit and the validator slot being withdrawable. Currently set to 2**19 = 524288 slots, or 2**23 seconds ~= 97 days.

2**19 slots is 2**22 seconds ~= 48.5 days.
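
The arithmetic, assuming 8-second slots:

SLOT_DURATION = 8  # seconds

slots = 2**19
seconds = slots * SLOT_DURATION  # 2**19 * 2**3 = 2**22 = 4,194,304 seconds
days = seconds / 86400           # ~48.5 days
assert seconds == 2**22
# 2**23 seconds (~97 days) would instead correspond to 2**20 slots.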

Forgotten Password or Private Key Partial Solution

On creation of a new account, an optional secondary account that needs no key can be created, with a smart contract attached to it. In the case of a forgotten password or lost key, the contract can restrict the usage of the secondary account so heavily that it would not be worth using unless it is yours; for example, it could allow only two whitelisted accounts that ether can be sent to and a very low maximum usage per week. This is to keep funds from being completely lost. All of this, including whether there is a secondary account at all, can only be configured when the account is created, not afterwards. To keep anybody from transferring the funds of other accounts into secondary accounts, someone who is still in possession of their own private key and password can return those funds to the primary account in one easy transaction.

Describe genesis

Issue

We don't plan on having a genesis block but rather just a genesis active and crystallized state which the first block will use for its state transition. This means that the 0th block in the chain might be a block from any slot at or after the slot at GENESIS_TIME.

Although this is not a "genesis block" in the sense that it is entirely predefined, it does need some exceptions to be properly handled in the current framework:

  • the 0th block will have no parent so the entirety of ancestor_hashes will be 0x00
  • If ancestor_hashes[0] == 0x00 then there is no parent. In this case, do not allow any attestations to be included in the block and do not impose the parent proposer attestation requirement on the block.
  • There is no parent so get_new_recent_block_hashes does not have the appropriate params and should be called with get_new_recent_block_hashes(old_block_hashes, -1, current_slot, 0x00)
  • update_ancestor_hashes should not be called

There might be a couple of other things. Just dumping this here and will make a more detailed PR soon.

Add a Notation section

Although the Python reference implementation is not supposed to be considered normative whenever it conflicts with the contents of the phase 0 spec, many of the algorithm definitions in the spec are presented via Python code. I think it might be helpful for new or novice readers to have a brief explanation that such code is to be interpreted merely as definitions of algorithms rather than as code that must be used verbatim in implementations.

I will create a PR with my proposed text additions.

sha3/blake mismatch in deposit merkle tree

Issue

We are building the receipt_tree in the pow deposit contract with sha3, but then are attempting to verify the merkle branch in the beacon chain with hash which is defined as blake2b. This won't work! :)

Proposed Solution

Either:

  • require sha3 (actually keccak256) in beacon chain implementations
  • deploy blake2b as a precompile on existing pow chain

LOGOUT message versioning for replay protection

LOGOUT special records need versioning added to handle replay protection, in a similar way to attestations. This version needs to be a component of the signed message.

A simple solution would be to:

  • add a data field for version
  • ensure that the sig is from the correct validator and of the message hash(LOGOUT_MESSAGE + version)

We might want to consider what a more general solution for handling versioning across attestations and special records (those that require it) might look like.

Incentive for running beacon nodes

A super dumbed down way of thinking about beacon nodes vs validator clients is geth and geth attach.

In that regard, a beacon node (which will bear the bulk of work in the beacon chain) will be passing data onto validators who do their magic and pass it back. The incentive for running validators (which can "attach" to a local beacon node, or a hosted one) is clear - stake is in the system, and "block rewards" are added to a validator's balance (though I think it should be a separate balance, not to increase the vote and power of a validator the longer they are in the system). But what is the incentive for running beacon nodes if they can run without validators and (anyone's) validators can just connect to them?

@JustinDrake mentioned something curious in his talk at Devcon - that Infura would be running beacon nodes (looks like maybe it wasn't him? Anyone remember who it was?). I understand from my discussion with @djrtwo that he may have meant superfull archive nodes that can serve rent-expired restoration purposes and also sync all shards all the time, and validators would then be connecting to such nodes, but regardless of the type of node that was implied there for beacon nodes, I don't see the incentive for running beacon nodes + validators as opposed to just validators and hooking onto someone else's beacon node. I see this as a replay of the problem we have now - too few people running full nodes with LES slots for light clients to attach to.

So I'd like to open up a discussion on beacon node incentivization. Am I missing something, has this been discussed somewhere? If so, I would appreciate if someone could point me to the relevant discussion.

References:

`CHUNK_SIZE` setting

Per #180 (comment), currently, SSZ_CHUNK_SIZE is set to 128 bytes in SSZ spec while another CHUNK_SIZE constant is set to 256 bytes in phase 1 spec. IMO it seems logical to unify the chunk size.

@JustinDrake and co, do we have any concern, like the size of proof of custody regarding chunk size?

Proposal to use SSZ for consensus only

Instead of using ssz for both network and consensus, I think we should use ssz only for consensus and standardize around protobuf at the network layer.

Benefits

  • Eliminates the ssz decoder since there's no need for the encoding to be reversible.
  • Eliminates the need for a length prefix in the ssz encoder. Same reason as above.
  • Allows the ssz encoder to be space inefficient (e.g. padding) since the result doesn't need to be stored or sent over the wire.
  • protobuf is already battle-tested and fits this use case well.
  • protobuf supports schemas and versioning out of the box.
  • protobuf already has solid libraries for many languages.
  • protobuf's code generation provides good ergonomics for developers.

Potential concerns

  • Supporting two serializers increases complexity
    • In my opinion, inventing a serializer that satisfies the needs of both consensus and networking is much more complex.
    • Using protobuf for networking is way simpler. This is definitely true for Prysm, and I imagine this is true for other implementation teams as well, but I'd like to hear other teams' opinions.
    • ssz becomes simpler if it's only used for consensus.
  • Re-encoding a block from the wire is inefficient
    • The cost of re-encoding is negligible when compared to the other steps necessary for verification. The byte array needs to be decoded, the encoding needs to be hashed, and the signature needs to be verified.
  • Protobuf is bad because [insert opinion here]
    • I'm open to other libraries, but protobuf seems like a good fit for the reasons stated above.

Previous discussions

Proof of Possession and BLSVerify clarification

#17 added the add_validator routine and the first use of bls_proof_of_possession, mentioned at the beginning of the spec: PoW Main changes.

As mentioned on Gitter, either sorting the validator keys or using the bls_proof_of_possession is necessary to avoid rogue public key attacks, where an attacker can claim that both he and Bob have signed a message when only Bob did.

Now regarding this part I'd like to confirm the following 2 things:

def add_validator(pubkey, proof_of_possession, withdrawal_shard,
                  withdrawal_address, randao_commitment, current_dynasty):
    # if following assert fails, validator induction failed
    # move on to next validator registration log
    assert BLSVerify(pub=pubkey,
                     msg=sha3(pubkey),
                     sig=proof_of_possession)
    ...
  1. Is sha3 the usual keccak256?

  2. Is BLSVerify standard ECDSA verify but with BLS curve?

    In Milagro that would be the following:

    /**	@brief ECDSA Signature Verification
    *
      IEEE-1363 ECDSA Signature Verification
      @param h is the hash type
      @param W the input public key
      @param M the input message
      @param c component of the input signature
      @param d component of the input signature
      @return 0 or an error code
    */
    extern int ECP_ZZZ_VP_DSA(int h,octet *W,octet *M,octet *c,octet *d);

    and if we take secp256k1

    /** Verify an ECDSA signature.
    *
    *  Returns: 1: correct signature
    *           0: incorrect or unparseable signature
    *  Args:    ctx:       a secp256k1 context object, initialized for verification.
    *  In:      sig:       the signature being verified (cannot be NULL)
    *           msg32:     the 32-byte message hash being verified (cannot be NULL)
    *           pubkey:    pointer to an initialized public key to verify with (cannot be NULL)
    *
    * To avoid accepting malleable signatures, only ECDSA signatures in lower-S
    * form are accepted.
    *
    * If you need to accept ECDSA signatures from sources that do not obey this
    * rule, apply secp256k1_ecdsa_signature_normalize to the signature prior to
    * validation, but be aware that doing so results in malleable signatures.
    *
    * For details, see the comments for that function.
    */
    SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ecdsa_verify(
        const secp256k1_context* ctx,
        const secp256k1_ecdsa_signature *sig,
        const unsigned char *msg32,
        const secp256k1_pubkey *pubkey
    ) SECP256K1_ARG_NONNULL(1) SECP256K1_ARG_NONNULL(2) SECP256K1_ARG_NONNULL(3) SECP256K1_ARG_NONNULL(4);

A proposal for chain initialization, main chain block inclusion, and deposit processing

On-chain contract

We create a deposit contract on the blockchain, with roughly the following code:

HashChainValue: event({prev_tip: bytes32, data: bytes[2048], value: wei_value, total_deposit_count: int128})
ChainStart: event({hash_chain_tip: bytes32, time: timestamp})

hash_chain_tip: public(bytes32)
total_deposit_count: int128

@payable
@public
def deposit(data: bytes[2048]):
    log.HashChainValue(self.hash_chain_tip, data, msg.value, self.total_deposit_count)
    self.total_deposit_count += 1
    self.hash_chain_tip = sha3(concat(self.hash_chain_tip, data, as_bytes32(msg.value), as_bytes32(self.total_deposit_count)))
    if self.total_deposit_count == 16384:
        log.ChainStart(self.hash_chain_tip, block.timestamp)

When a user wishes to move their ETH from the 1.0 chain to the 2.0 chain, they should call the deposit function, sending along 32 ETH and providing as data a SimpleSerialize'd object with the following arguments (in order):

  • pubkey: int256
  • proof_of_possession: [int256]
  • withdrawal_shard: int64
  • withdrawal_address: bytes20
  • randao_commitment: hash32

If they wish to deposit more than 32 ETH, they would need to make multiple calls.

[Governance note: when publishing the final version of this contract, it may be desirable to issue some kind of formal EIP that "enshrines" its privileged status, and encourage CarbonVotes and other polls to vote on it, which if successful would make clear to the community that this contract is "part of the protocol" and so the community is responsible for providing ongoing protocol improvements that eventually unlock any ETH sent into this contract]

Chain initialization

When a ChainStart log is published, this initializes the chain, setting the following parameters:

  • POW_CHAIN_HASH_ROOT (new parameter) = hash_chain_tip
  • GENESIS_TIME = time
  • PROCESSED_HASH_ROOT (new parameter) = hash_chain_tip

It runs on_startup with initial_validator_entries equal to the list of data records published as HashChainValue logs so far, in the order in which they were published (oldest to newest).

Chain updating

Define a validator's "view" as being the value obtained by calling DEPOSIT_CONTRACT_ADDRESS.get_hash_chain_tip() from the post-state of the block 512 blocks behind the current head of the PoW chain. Define a "valid view" (defined subjectively from the PoV of a validator) as a value which is a descendant of POW_CHAIN_HASH_ROOT and can be obtained by calling DEPOSIT_CONTRACT_ADDRESS.get_hash_chain_tip() from the post-state of a block that is part of the canonical PoW chain at least 512 blocks behind the head. Note that any valid view should be either equal to or an ancestor of the validator's view.

Blocks will have a new data field, hash_chain_tip_vote: hash32, which proposers are expected to fill with the following algorithm:

  • Let slot B be the last slot during which the POW_CHAIN_HASH_ROOT changed.
  • If all blocks since slot B contained a hash_chain_tip_vote that was either equal to the POW_CHAIN_HASH_ROOT or was an invalid view, vote the validator's view.
  • If there was at least one valid view published as a hash_chain_tip_vote since slot B, copy the first valid view.
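
A minimal sketch of this voting rule (is_valid_view and own_view are hypothetical inputs; votes_since_b is the list of hash_chain_tip_vote values seen since slot B, oldest first):

def choose_hash_chain_tip_vote(votes_since_b, pow_chain_hash_root, own_view, is_valid_view):
    # Copy the first new valid view voted since slot B, if any.
    for vote in votes_since_b:
        if vote != pow_chain_hash_root and is_valid_view(vote):
            return vote
    # Otherwise every vote so far was either the current POW_CHAIN_HASH_ROOT
    # or an invalid view, so vote our own view.
    return own_view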

Note that assuming >= 50% honest, this algorithm will converge to all honest proposers voting the same value, which is a descendant of the POW_CHAIN_HASH_ROOT. If the same value is voted for in >= 683 of the last 1024 blocks, set the POW_CHAIN_HASH_ROOT to this value.

Note that this is a vote, not a consensus rule; blocks with incorrect votes should not be rejected.

Deposit processing

Add a new type of SpecialObject, which consists of the entire hash-linked-list of HashChainValue logs since the previous PROCESSED_HASH_ROOT up to the POW_CHAIN_HASH_ROOT. If such a valid hash-linked-list is submitted, then we run add_validator with the given values for each record, and set PROCESSED_HASH_ROOT = POW_CHAIN_HASH_ROOT.

This does mean that deposit processing is not "automatic", in that deposits are not automatically read from the PoW chain; only a hash is automatically read, and the rest of the data must be manually imported by some block proposer. This is by design, to limit the amount of in-consensus communication between the PoW chain and the beacon chain required to a single hash value.

Alternate withdrawal mechanism (specified proposal)

See https://ethresear.ch/t/suggested-average-case-improvements-to-reduce-capital-costs-of-being-a-casper-validator/3844 for more theoretical discussion.

When a validator is removed from the active set via exit_validator(...), they are assigned a queue_id (there is an incrementing next_queue_id variable in the crystallized_state that keeps track of what ID to assign). During every state recalculation, the VALIDATOR_WITHDRAW_COUNT validators with the lowest queue_ids are withdrawn.

We set VALIDATOR_WITHDRAW_COUNT to 2^3 = 8. Rationale: if ~12m (3 * 2^22) ETH (or 2^17 validators) are trying to exit at the same time following an attack, it will take them 2^14 cycles (= 2^23 seconds ~= 3.3 months) to exit.
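
The rationale arithmetic, assuming 64-slot cycles and 8-second slots:

VALIDATOR_WITHDRAW_COUNT = 2**3  # validators withdrawn per state recalculation
CYCLE_LENGTH = 64                # slots per cycle (assumed)
SLOT_DURATION = 8                # seconds per slot (assumed)

exiting_validators = 2**17
cycles_to_exit = exiting_validators // VALIDATOR_WITHDRAW_COUNT  # 2**14 cycles
seconds_to_exit = cycles_to_exit * CYCLE_LENGTH * SLOT_DURATION  # 2**14 * 2**6 * 2**3 = 2**23 seconds
assert cycles_to_exit == 2**14 and seconds_to_exit == 2**23      # 2**23 seconds is roughly 3.3 months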

This mechanism serves a triple purpose (see the ethresear.ch thread above). First, it makes the withdrawal time longer if more validators are participating, and shorter if fewer validators are participating, reducing the long-run uncertainty in the number of validators, though at some cost to long-run certainty in the safe weak subjectivity duration (a validator will at least know, at the time they go offline, when they will next have to come back online).

Second (less important), it creates further anti-centralization pressure because larger deposits will on average have to wait longer than small deposits.

Third, and most important, it makes withdrawals potentially very quick in the average case where few validators are exiting, and slower in the case where more validators are exiting. This changes the philosophy of validating somewhat, from "you are definitely committing your ETH for a long time", to "you retain high liquidity much of the time, but you agree that in the event that an attack appears you may be conscripted and required to stick around for a few months longer than you may have expected".

AttestationRecord aggregate sig message serialization

In the following section:

Verify that aggregate_sig verifies using the group pubkey generated and hash(slot.to_bytes(8, 'big') + parent_hashes + shard_id + shard_block_hash + justified_slot.to_bytes(8, 'big')) as the message.

I think that shard_id should be shard_id.to_bytes(2, 'big') to be consistent with the other items in this list.

The PoC implementation serializes the shard_id in this manner.

https://github.com/ethereum/beacon_chain/blob/be4bff59da2bb5440dbe3ecd3338b61749c37ae0/beacon_chain/state/state_transition.py#L120
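
A sketch of the message construction with the suggested fix applied (assumes parent_hashes is already a concatenated byte string and hash is the spec's hash function):

def attestation_message(slot, parent_hashes, shard_id, shard_block_hash, justified_slot, hash):
    # shard_id serialized as 2 big-endian bytes, consistent with the other fixed-width fields.
    return hash(
        slot.to_bytes(8, 'big')
        + parent_hashes
        + shard_id.to_bytes(2, 'big')
        + shard_block_hash
        + justified_slot.to_bytes(8, 'big')
    )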

Per-block processing: min AttestationRecord count

A block can have 0 or more AttestationRecord objects

I understand it should have 1 or more AttestationRecord objects, considering the following:

Verify that an attestation from this validator is part of the first (ie. item 0 in the array) AttestationRecord object

Randao_update object for active state

We are currently modifying crystallized_state during block processing at the last step:
crystallized_state.validators[proposer_index].randao_commitment = block.randao_reveal

The correct way is to maintain a RandaoUpdate object in active_state, and during state recalculations update each validator's randao_commitment

Shard block vs. beacon block

I've noticed that there's an assumption that "block" means a beacon block and "shard block" means a shard block.

For example:

    # Block hashes not part of the current chain, oldest to newest
    'oblique_parent_hashes': ['hash32'],
    # Shard block hash being attested to
    'shard_block_hash': 'hash32',

I think it would be useful (particularly to new readers) to either specify "beacon block" or to formally establish the "block == beacon block" rule.

ValidatorRecord doesn't define proof_of_possession

The PoW ValidatorRegistration contract description in the spec states that a BLS proof of possession will be the final argument to the register function of the contract; however, the ValidatorRecord structure doesn't define this item.

A BLS proof_of_possession of type bytes is given as the final argument.

A ValidatorRecord has the following fields:

{
    # BLS public key
    'pubkey': 'uint256',
    # Withdrawal shard number
    'withdrawal_shard': 'uint16',
    # Withdrawal address
    'withdrawal_address': 'address',
    # RANDAO commitment
    'randao_commitment': 'hash32',
    # Slot the RANDAO commitment was last changed
    'randao_last_change': 'uint64',
    # Balance in Gwei
    'balance': 'uint64',
    # Status code
    'status': 'uint8',
    # Slot when validator exited (or 0)
    'exit_slot': 'uint64'
}

`get_shards_and_committees_for_slot` for future slots

Issue

get_shards_and_committees_for_slot only provides a shuffling for the previous cycle and the next cycle (since the last state recalc). If a cycle's worth of proposers don't show up, and the next block proposed is in the following cycle, then this function cannot be used to get the committees and proposers.

This function needs to provide valid committee/proposer for slots of arbitrary distance into the future.

Proposed solution

def get_shards_and_committees_for_slot(crystallized_state: CrystallizedState,
                                       slot: int) -> List[ShardAndCommittee]:
    earliest_slot_in_array = crystallized_state.last_state_recalculation - CYCLE_LENGTH
    assert earliest_slot_in_array <= slot
    if slot < crystallized_state.last_state_recalculation:
        index = slot - earliest_slot_in_array
    else:
        # Map any slot at or after the last recalculation (arbitrarily far in the
        # future) into the second half of the array.
        index = CYCLE_LENGTH + (slot % CYCLE_LENGTH)
    return crystallized_state.shard_and_committee_for_slots[index]

The else branch allows slots arbitrarily far in the future to be mapped to the second half of the crystallized_state.shard_and_committee_for_slots array.

Handle registrations with bad proofs of possession

At the moment validators with a bad proof of possession are asserted away in the add_validator function. One possible way to handle bad proofs of possession is to add a corresponding validator entry with status PENDING_WITHDRAW.

Understanding the `get_new_shuffling()` function

For the past several weeks, as part of my hackternship, I have been trying to understand the get_new_shuffling() function. In its current iteration, the get_new_shuffling() function is in charge of sorting validators into committees and associating a committee to a particular shard for a particular cycle and a slot in that cycle. The shard in question is one that has a crosslink from the committee that was given by get_new_shuffling().

Is the above right?

My main question is what is the motivation for selecting committees in this way?

Max length of specials

Issue

Block.specials currently has no max length. We also don't check the validity of SpecialRecords until doing state recalcs. This can lead to a DoS vector in which blocks are filled with junk specials, increasing block size as well as state recalc time.

Proposed solution

  • Add a constant MAX_SPECIALS_PER_BLOCK such that block validity requires len(block.specials) <= MAX_SPECIALS_PER_BLOCK (see the sketch after this list)
  • Check SpecialRecord validity (signature, format, etc.) in block processing after checking attestation validity. Throw out the block if any specials are invalid
  • During state recalc, ActiveState.pending_specials are already known to be valid, so just process the related state transitions rather than re-checking signatures at this point.
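
A minimal sketch of the proposed block-validity checks (the MAX_SPECIALS_PER_BLOCK value and the is_valid_special helper are hypothetical):

MAX_SPECIALS_PER_BLOCK = 16  # hypothetical value

def validate_block_specials(block, is_valid_special):
    # Reject the whole block if it carries too many specials or any invalid special.
    if len(block.specials) > MAX_SPECIALS_PER_BLOCK:
        return False
    return all(is_valid_special(special) for special in block.specials)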

Miscellaneous comments

Nitpicks/readability

  1. "PoW main chain" => PoW chain
  2. Run a spell checker (e.g. "rocess" => process, "parents block" => parent block)
  3. Be consistent with American vs British English. E.g. "grey" => "gray"
  4. Sometimes code formatting is missing. For example "For every shard S" => "For every shard `S`", "every CYCLE_LENGTH blocks" => "every `CYCLE_LENGTH` blocks", "into N pieces" => "into `N` pieces"
  5. Be consistent with variable names. E.g. both withdrawal_addr and withdrawal_address are used (prefer the latter), as well as bls_proof_of_possession and proof_of_possession.
  6. Lint all the code for formatting. For example there are spacing inconsistencies: status=0 (no spaces) vs pubkey = pubkey (with spaces).
  7. Note the PoW chain contract does no validation on the inputs (in particular, no validation of bls_proof_of_possession).
  8. De-emphasise the comment about prioritize(block_hash, value). It's a side note (not strictly part of the Ethereum 2.0 spec) and is confusing at the very top.
  9. Clean up the status states. Suggest: "pending log in", "logged in", "pending log out", "logged out", "exited"
  10. Clearly label the slashing conditions as such. Searching for "slashing" only has hits in the TODO.
  11. Consider adding comments in the code. For example meaty functions shuffle, get_new_shuffling, change_validators have no comments.
  12. Consider replacing the half-words-half-code line-by-line assignments in "On startup" by a real on_startup function.
  13. Same as above for "For each one of these attestations", "Balance recalculations related to FFG rewards", min_empty_validator, etc.
  14. What does the "[TODO]" after "For each one of these attestations" refer to?
  15. Replace instances of "PoS chain" with "beacon chain" for consistency.
  16. For the fork choice rule, clearly state it (in the notation of the spec). Deemphasize the ethresear.ch post.
  17. Try to find a better name for penalized_in_wp, shard_and_committee_for_slots.
  18. Avoid abbreviations in variable names (most variable names are ok!). For example last_state_recalc => last_state_recalculation.
  19. Avoid numerical constants outside of the "Constants" section (e.g. "1024 shards").
  20. Embrace the (now accepted and commonly used) word "Shasper". Better than the awkward (and inconsistent) phrasings in the spec: "PoS/sharding", "Casper+Sharding", "Casper/sharding"
  21. Is the diagram "Shuffle -> Split by height -> Split by shard" still accurate? I've always been a bit confused by the different committee sizes at the end.
  22. Some internal variable names are confusing, e.g. ifh_start, avs, sback, cp. Avoid one-letter variables (counters like i, j, k are fine, but things like o, m, d are harder to follow).
  23. Clearly state the global clock assumption, and other relevant assumptions.
  24. Be consistent between slot (e.g. in CrosslinkRecord) and slot_number (e.g. in beacon chain block).

Content

  1. Clearly mention that the hash function based on BLAKE2b-512 is a placeholder that should be replaced by a STARK-friendly hash function for production.
  2. Add (possibly in TODO) the double-batched Merkle accumulator for beacon chain blocks.
  3. Add (possibly in TODO) the hardening of RANDAO against orphaned reveals (ethresear.ch post coming). Also flesh out the RANDAO part of the spec.
  4. Replace sha3(pubkey) with hash(pubkey). Same for sha3("bye bye").
  5. Should the state-transition be per-slot, not per-block? If there's a skip slot (e.g. the beacon proposer does not show up) recent_block_hashes should still be updated, right?
  6. What are the rules for Merklelising the crystallized state? I would avoid a plain hash as this would not be friendly to light clients.

justified_block_hash.slot != justified_slot

With reference to the following:

Verify that the justified_slot and justified_block_hash given are in the chain and are equal to or earlier than the last_justified_slot in the crystallized state.

Is it intended that the block identified by justified_block_hash has a slot_number that does not equal justified_slot? In other words, can an AttestationRecord point to a justified_slot which contains a block other than justified_block_hash?

With the present wording in the spec, I believe that is possible.

Thanks!

`validator.randao_last_change` is never updated

Two issues with respect to RANDAO commitment handling:

  • randao_last_change is set when a validator is inducted, but is not changed when the validator's commitment is updated in response to the RANDAO_REVEAL special.

  • A variable V.randao_last_reveal appears but is not defined. Suspect a typo for V.randao_last_change.

Separate Single Attestation into its Own Wire Format

Background:

AttestationRecord is currently used for both aggregated attestations and single attestations: when an attester attests on a block, or when a proposer includes its own attestation as it broadcasts the block at the network layer.

Proposal:

I wonder if it's worth it to separate out the single attestation into its own object, which would look like the one below.
As you can see, we can save network bandwidth on oblique_parent_hashes (not needed for a single attestation) and attester_bitfield (bytes -> uint32) per attestation every time a proposer or attester transmits over the wire. The downside is the increased complexity of managing two different attestation objects.

SingleAttestation:

fields = {
    'slot': 'int64',
    'shard_id': 'int16',
    'shard_block_hash': 'hash32',
    'attester_index': 'int32',
    'justified_slot': 'int64',
    'justified_block_hash': 'hash32',
    'aggregate_sig': ['int256']
}

AttestationRecord (AggregatedAttestation):

fields = {
    'slot': 'int64',
    'shard_id': 'int16',
    'oblique_parent_hashes': ['hash32'],
    'shard_block_hash': 'hash32',
    'attester_bitfield': 'bytes',
    'justified_slot': 'int64',
    'justified_block_hash': 'hash32',
    'aggregate_sig': ['int256']
}

Misc questions

Questions

  • 1) Should there be a check in the queuing logic that the number of validators is less than MAX_VALIDATOR_COUNT?
  • 2) Should slot and dynasty be set to int32 (as opposed to int64)? With 8 second slot durations 32 bits gives us 1000+ years. (Same for justified_streak, justified_slot, etc.)
  • 3) Is pubkey being a int256 too small for BLS12-381?
  • 4) What happens if a shard block gets sufficient attestations to become a crosslink, but that shard block conflicts with an existing crosslink in the beacon chain?
  • 5) What calculation led to "~39.4%" in "the amount of time it takes for the quadratic leak to cut deposits of non-participating validators by ~39.4%"? Consider detailing with the constants.
  • 6) What calculation led to "~3.88%" in "~3.88% annual interest assuming 10 million participating ETH"? Consider details with the constants.
  • 7) Should we allow deposits greater than DEPOSIT_SIZE? What about top-ups and penalties for being under the DEPOSIT_SIZE?
  • 8) What is the maximum bias to the global clock that we are assuming honest validators to have?
  • 9) Is it still possible that a dynasty transition can happen within a cycle?
  • 10) What happens if a validator deregisters and then wants to re-register?
  • 11) Do we need to define a SpecialObject to change the withdrawal_shard_id, withdrawal_address or randao_commitment?
  • 12) Should we deprecate wei in favour of 64-bit balances? Indeed, 64 bits of granularity is enough for balances, and is consistent with a 64-bit EVM 2.0.

Nitpicks

  • 1) Add "type codes" for the possible SpecialObject types to clean up obj.type == 0 and obj.type == 1.
  • 2) Consider adding a table of contents after the introduction for the top-level sections.

BLS12-381: Clarification between G1/G2 and serialisation

@vbuterin @JustinDrake thank you for the new spec regarding BLS12-381.

Comparison with ethresear.ch post

I've compared it with the mini-spec from https://ethresear.ch/t/pragmatic-signature-aggregation-with-bls/2105/31.

I noticed the following differences:

[screenshot comparing the spec's parameters with the ethresear.ch mini-spec]

As of this version of the specs we have G1 48 bytes and G2 96 bytes (like Zcash) while the ethresear.ch post (and Chia Network) is using the G1 96 bytes and G2 48 bytes.

I.e. Are the changes intentional?

Serialization

Internally, many libraries use a custom binary representation for bigints in crypto code to avoid dealing with carries; for example, Milagro:

4 Handling BIG Numbers
4.1 Representation
One of the major design decisions is how to represent the 256-bit field elements required for the elliptic curve and pairing-based cryptography. Here there are two different approaches.

One is to pack the bits as tightly as possible into computer words. For example on a 64-bit computer 256-bit numbers can be stored in just 4 words. However to manipulate numbers in this form, even for simple addition, requires handling of carry bits if overflow is to be avoided, and a high-level language does not have direct access to carry flags. It is possible to emulate the flags, but this would be inefficient. In fact this approach is only really suitable for an assembly language implementation.

The alternative idea is to use extra words for the representation, and then try to offset the additional cost by taking full advantage of the "spare" bits in every word. This idea follows a "corner of the literature" which has been promoted by Bernstein and his collaborators in several publications.

or BearSSL is using i15/i31 limbs (int16 and int32 with spare bits):

Elliptic Curves
BearSSL includes several implementations of elliptic curves. Some use the same generic big integer functions as RSA (“i15” and “i31” code), and thus inherit their constant-time characteristics; other include specialised code which is made faster by exploiting the special format of the involved field modulus.

Some points are worth mentioning, for the implementations of NIST curves:

  • ...

  • ECDSA signature verification entails computing aG+bQ where a and b are two integers (modulo the curve order), G is the conventional generator, and Q is the public key. Classic implementations mutualise the doublings in the two double-and-add instances; however, this implies a larger table for window optimisation: if using 2-bit windows, then the aggregate table must have all combinations of G and Q with multipliers up to 3, so we are in for at least 13 extra values (apart from G and Q themselves). Each such point uses 216 bytes (three coordinates large enough for the P-521 curve, over 31-bit words with an extra “bit length” word) so such a window would use up almost 3 kB of stack space. We cannot afford that within BearSSL goals.

So we need to define a canonical serialisation that is used during communication.

If I understood correctly, the serialisation format is defined by:

Specifically, a point in G1 as a 384-bit integer z, which we decompose into:

- x = z % 2**381
- highflag = z // 2**382
- lowflag = (z % 2**382) // 2**381

which is just the natural way to extend uint32 / uint64 to uint384.

Is the following visualisation correct? I assume big endian, so most significant bit on the left.

 384     381            192              0
  +-------+--------------+---------------+
  | 0 0 0 |    high      |      low      |
  +-------+--------------+---------------+

   unused <--------------x--------------->
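
Reading the quoted decomposition literally as Python (a sketch; the byte form assumes the natural 48-byte big-endian encoding):

def decompose_g1(z: int):
    # z is a 384-bit integer encoding a G1 point per the quoted spec text.
    x = z % 2**381
    highflag = z // 2**382
    lowflag = (z % 2**382) // 2**381
    return x, highflag, lowflag

def decompose_g1_bytes(b48: bytes):
    assert len(b48) == 48
    return decompose_g1(int.from_bytes(b48, 'big'))

assert decompose_g1(2**382 + 5) == (5, 1, 0)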

Add "design goals" to the README?

There is much chat about the "design goals" of ETH2.0.

Is it worthwhile to add a section to the README that lists the design goals for this project?

I think this could be helpful in terms of communicating to the broader community about the context around ETH2.0.

A proposal for shard blocks (rough draft / outline)

EDIT 2018.11.09: this is now a pull request. #123

Constants

  • CHUNK_SIZE: 256 bytes

Block structure and validation

A ShardBlock object has the following fields:

{
    # Slot number
    'slot': 'uint64',
    # Parent block hash
    'parent_hash': 'hash32',
    # Beacon chain block
    'beacon_chain_ref': 'hash32',
    # Depth of the Merkle tree
    'data_tree_depth': 'uint8',
    # Merkle root of data
    'data_root': 'hash32'
    # State root (placeholder for now)
    'state_root': 'hash32',
    # Attestation (including block signature)
    'attester_bitfield': 'bytes',
    'aggregate_sig': ['uint256'],
}

To validate a block on shard shard_id, compute as follows:

  • Verify that beacon_chain_ref is the hash of the slot'th block in the beacon chain.
  • Let cs be the crystallized state of the beacon chain block referred to by beacon_chain_ref. Let validators be [validators[i] for i in cs.current_persistent_committees[shard_id]].
  • Assert len(attester_bitfield) == ceil_div8(len(validators))
  • Let curblock_proposer_index = hash(cs.randao_mix + bytes8(shard_id)) % len(validators). Let parent_proposer_index be the same value calculated for the parent block.
  • Make sure that the parent_proposer_index'th bit in the attester_bitfield is set.
  • Generate the group public key by adding the public keys of all the validators for whom the corresponding position in the bitfield is set to 1. Verify the aggregate_sig using this as the pubkey and the parent_hash as the message.
  • Verify the data (see below)

Note that we expect blocks to be broadcasted along with the signature from the curblock_proposer_index'th validator in the validator set for that block.

Data root

  • Let data_size = calc_block_maxbytes(cs) (function TBD; think of it as a function that returns values between ~1024 and ~100k, always multiples of CHUNK_SIZE)
  • Verify that 2**data_tree_depth = next_power_of_2(data_size // CHUNK_SIZE)
  • Verify the availability of the data in the tree as a tree of depth data_tree_depth
  • Verify that all data after data_size bytes (ie. starting from chunk data_size // CHUNK_SIZE) is all set to zero bytes
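
A sketch of the size and padding checks from the list above (calc_block_maxbytes is the proposal's TBD helper, so data_size is passed in directly here; data-availability verification is out of scope):

CHUNK_SIZE = 256  # bytes

def next_power_of_2(x: int) -> int:
    n = 1
    while n < x:
        n *= 2
    return n

def validate_shard_block_data(data_tree_depth: int, data: bytes, data_size: int) -> bool:
    # data_size comes from calc_block_maxbytes(cs) and is a multiple of CHUNK_SIZE.
    if 2**data_tree_depth != next_power_of_2(data_size // CHUNK_SIZE):
        return False
    # Everything from chunk data_size // CHUNK_SIZE onwards must be zero bytes.
    padding = data[data_size:]
    return padding == b'\x00' * len(padding)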

Fork choice rule

The fork choice rule is a two-part fork choice rule. First, use as a root the block referred to in the most recent successful crosslink for that shard in the beacon chain's fork choice. To find the head from there, use LMD GHOST, ie. given a choice between two child blocks of a given parent, choose the block that has more most-recent (ie. highest-slot-number) attestations (including attestations of its descendants) from the validators that are active in the most recent validator set.
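
A rough sketch of the LMD GHOST walk described above (children_of and ancestors_of are hypothetical helpers; latest_attestations maps each active validator to the block hash of its highest-slot attestation):

def lmd_ghost_head(root, children_of, ancestors_of, latest_attestations):
    head = root
    while True:
        children = children_of(head)
        if not children:
            return head
        # Score each child by the number of latest attestations that point at
        # the child itself or at one of its descendants.
        def score(child):
            return sum(1 for target in latest_attestations.values()
                       if target == child or child in ancestors_of(target))
        head = max(children, key=score)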

Modifications to crosslinks

Crosslinks also commit to the "combined data root", a Merkle root of the data roots of all blocks since the last crosslink created using the following process:

def get_zeroroot_at_depth(n):
    o = b'\x00' * CHUNK_SIZE
    for i in range(n):
        o = hash(o + o)
    return o

def mk_combined_data_root(depths, roots):
    default_value = get_zeroroot_at_depth(max(depths))
    data = [default_value for _ in range(next_power_of_2(len(roots)))]
    for i, (depth, root) in enumerate(zip(depths, roots)):
        # Lift each data root from its own depth up to max(depths) by hashing it
        # with the zero-subtree root of the matching size at every level.
        value = root
        for j in range(depth, max(depths)):
            value = hash(value + get_zeroroot_at_depth(j))
        data[i] = value
    return compute_merkle_root(data)

Notice that this is equivalent to padding each block's data up to the power of two equal to or above the largest block's data, then concatenating the data, then padding the result with zero bytes to the next power of two, and taking the Merkle root of the result.

Crosslink participants should only sign data roots if the data roots match up with the result of this calculation.

Crosslink records also commit to the proof of custody computation of this data.

When to add new validators via `add_validator`

Issue

It is not currently clearly defined when we should call add_validator to process new PoW logs other than at the beginning to process the initial validator set.

Proposed Implementation

Call add_validators_from_pow_logs right before calling change_validators during a validator change. Do this before activating/withdrawing in an effort to optimistically induct new validators from the pow chain if possible.

validator_logs includes all deposit logs from the PoW chain up until the pow_chain_reference of the current block.

def add_validators_from_pow_logs(validators: List[ValidatorRecord],
                                 validator_logs: List[ValidatorLog],
                                 current_slot: int) -> None:
    for validator_log in validator_logs:
        add_validator(
            validators=validators,
            pubkey=validator_log.pubkey,
            proof_of_possession=validator_log.proof_of_possession,
            withdrawal_shard=validator_log.withdrawal_shard,
            withdrawal_address=validator_log.withdrawal_address,
            randao_commitment=validator_log.randao_commitment,
            current_slot=current_slot
        )

BLS Proof of Possession Explanations

Can we include a short section on BLS proof of possession? Mainly: how do we generate it, and what's the standard formula? I think this needs a different hash function, which the beacon chain shall support.

Tree hash functions for SSZ

The following is a general-purpose strategy for making all data structures in the beacon chain more light client friendly. When (i) hashing the beacon chain active state, (ii) hashing the beacon chain crystallized state, or (iii) hashing beacon chain blocks, we instead use the following hash function specific to SSZ objects, where hash(x) is some underlying hash function with a 32-byte output (eg. blake(x)[0:32])

def hash_ssz_object(obj):
    if isinstance(obj, list):
        objhashes = [hash_ssz_object(o) for o in obj]
        return merkle_root(objhashes)
    elif not isinstance(obj, SSZObject):
        return hash(obj)
    else:
        o = b''
        for f in obj.fields:
            val = getattr(obj, f)
            o += hash_ssz_object(val)
        return hash(o)    

Where merkle_root is defined as follows:

def merkle_root(objs):
    min_pow_of_2 = 1
    while min_pow_of_2 <= len(objs):
        min_pow_of_2 *= 2
    o = [0] * min_pow_of_2 + [len(objs).to_bytes(32, 'big')] + objs + [b'\x00'*32] * (min_pow_of_2 - len(objs))
    for i in range(min_pow_of_2 - 1, 0, -1):
        o[i] = hash(o[i*2] + o[i*2+1])
    return o[1]

Collision resistance is only guaranteed between objects of the same type, not objects of different types.

Efficiency

Fundamentally, Merkle-hashing instead of regular hashing doubles the amount of data hashed, but because hash functions have fixed costs the overhead is higher. Here are some simulation results, using 111-byte objects for accounts because this is currently roughly the size of a beacon chain ValidatorRecord object:

>>> from hashlib import blake2b
>>> def hash(x): return blake2b(x).digest()[:32]
>>> import time
>>> accounts = [b'\x35' * 111 for _ in range(1000000)]
>>> a = time.time(); x = hash(b''.join(accounts)); print(time.time() - a)
0.42771387100219727
>>> a = time.time(); x = merkle_root(accounts); print(time.time() - a)
1.2481215000152588

Miscellaneous beacon chain changes

Below is a summary of suggestions from miscellaneous internal discussions:

  • 1. Fair staking: Every validator has the same amount BALANCE_AT_RISK = 24 ETH at risk (lower than DEPOSIT_SIZE = 32 ETH). In particular, penalties only apply to the balance at risk (regardless of the size of the balance), and validators with a balance below BALANCE_AT_RISK are automatically exited.
  • 2. Delayed signature inclusion: Aggregate signatures for slot n can be included onchain no earlier than slot n + SIGNATURE_INCLUSION_DELAY (e.g. SIGNATURE_INCLUSION_DELAY = 4). This allows for the safe reduction of SLOT_DURATION (e.g. to 4 seconds), and reduced beacon chain overhead from more efficient aggregation. The fork choice rule is unchanged (it takes into account offchain signatures).
  • 3. Type homogenisation: All integer types are homogenised to uint64. Exceptions are made for signature types, and possibly where a significant performance penalty would be observed.
  • 4. Message domain isolation: All signed messages (including proofs of possession) contain a fork_version field which is checked as part of the same unified signature verification logic.
  • 5. Special object count limit by kind: Every kind of special (LOGOUT, CASPER_SLASHING, etc.) has a separate object count limit per block.
  • 6. Weakened no-surround slashing condition: The no-surround slashing condition is replaced by the following: A validator must not cast two votes such that target1 = source1 + 1, source2 <= source1 and target2 >= target1. (See this ethresear.ch post for context.)
  • 7. Order special objects by kind: Limits the possibility for edge cases when processing special objects.
  • 8. BLS12-381: Finalise the move to the new curve.
  • 9. Fixed-sized shard blocks: For simplicity. When the pool of validators is critically low proofs of custody can be disabled and notaries can rely on data availability proofs only.
  • 10. Withdrawal credentials: Replace withdrawal_address and withdrawal_shard with withdrawal_credentials which is composed of a version number and (for version 0) the hash of a BLS pubkey.
  • 11. PoW receipt root votes: Replace candidate_pow_receipt_root with a map from candidate PoW receipt roots to vote counts to avoid bad PoW receipt root candidates from polluting a full voting period.
  • 12. Reduce PoW receipt root threshold: From 2/3 (needlessly conservative given a committee size of 1,024) to 1/2 for improved liveness.
  • 13. Crosslink hash: Fix crosslink hashes to bytes([0] * 32) until phase 1 for cleanliness and unambiguity.
  • 14. Minimum registration period: Force validators to be registered for some amount of time (e.g. 1 week) to mitigate join-leave attacks (for when re-registrations are possible).
  • 15. Use uint64 for shard number: uint16 will plausibly be too small for the future, and is not consistent with the homogenisation to uint64. For extra hashing performance hash(n) can be cached for n < SHARD_COUNT.
  • 16. Constrain genesis time: GENESIS_TIME should fall at 00:00 UTC.
  • 17. RANDAO cleanup: Replace randao_last_change with a counter called missed_slots_streak to remove the notion of "layer" (and the edge case of a validator revealing twice in a RANDAO layer).
  • 18. Attestations per block: Set a maximum number of attestations per block.

Should all initial validators be set to ACTIVE?

Currently, all initial validators are set to PENDING_ACTIVE during on_startup(); this means the 16384 validators will slowly trickle in instead of all becoming active at once. Not sure if this is intended.
Given MIN_VALIDATOR_SET_CHANGE_INTERVAL is set to ~3%, the best case scenario is that all validators become ACTIVE after 36 hours.

Messy documents

Everything is a huge wall of text (the beacon chain specs have over 5000 words), and although there are headings, there is no table of contents, so one has to read through everything just to learn the general structure of the document.
I seriously recommend that the documents are split into smaller documents, which are then grouped according to topic. I also recommend that tables of contents are added to the documents.
I also recommend that you do not solely use python scripts to explain something you propose.
Explaining should happen in plain text; Code should only be used for definitions.
I therefore recommend that you explain your concepts precisely in plain text, and provide your python code as a reference implementation.
It is way easier to find logical errors in prose than in code, as code contains additional errors that happened to occur when transforming a thought into code.

Data type definitions would be way more legible if you used tables instead of pseudocode.

Cross-references within the documents would allow faster information retrieval and easier comprehension for non-maintainer readers.
I am trying to learn what your progress regarding sharding is, but it is too tiring to read through your messy specification documents.
I think there are many others who feel the same way, and I am also convinced that the current state of affairs is not according to the spirit of rigorous community participation.

It would be in the best interest of the whole project to perform a clean-up of your documents, as it would also improve the outward appearance of the project.

Is MAX_VALIDATOR_CHURN_QUOTIENT wrong?

At most 1/MAX_VALIDATOR_CHURN_QUOTIENT of the validators can change during each validator set change.

MAX_VALIDATOR_CHURN_QUOTIENT = 2^5 = 32

So this equates to saying

At most 1/32 of the validators can change during each validator set change.

So clearly there is a mistake. Perhaps MAX_VALIDATOR_CHURN_QUOTIENT = 2^(-5)?

Merkleise beacon chain blocks

Currently beacon chain blocks are hashed as monoliths. This is not friendly to light clients because beacon chain blocks could be quite large (as currently specced, attestations and specials are unbounded arrays of records).

My suggestion is to hash the individual fields in a BeaconChainBlock (of which there are 8) and then consider those as leaves of a Merkle tree. We then replace ancestor_hashes, oblique_parent_hashes, justified_block_hash, etc. with ancestor_roots, oblique_parent_roots, justified_block_root, etc.

TODO: SimpleSerialize (SSZ) spec

Open an issue for following up our discussion on gitter.

Specification requirements

  • Design rationale
  • Encoding
  • Decoding
  • Types
    • Integers
      • [not in the current implementation] Specify the signed and unsigned integer support.
      • [not in the current implementation] Static types?

Anything else? :)

Reference

cc @vbuterin @djrtwo @arnetheduck @mratsim @paulhauner @NatoliChris @PoSeyy

timestamp type inconsistencies

Issue

There are currently type inconsistencies with timestamps.

  • genesis_time is defined in the BeaconState as a hash32
  • timestamp is defined as uint256 in DEPOSIT_PROOF
  • on_startup takes the param genesis_time as type uint64

And then math is done freely between these types

Separate type issue:

  • msg.value is brought in from the pow contract as a uint256 but then it is compared to DEPOSIT_SIZE (a uint64) for equality

Proposed solution

  • cast timestamp types to uint64 representation in the PoW chain contract before adding a branch to the merkle tree, emitting HashChainValue events, and ChainStart events
  • change all timestamp types listed above in the beacon chain to uint64
  • cast msg.value to uint64 before adding to merkle branch and emitting in events in PoW contract
  • change DEPOSIT_PROOF.deposit_data.msg_value to uint64

The amount of checking performed by the PoW chain contract

The spec says the following:

The registration contract emits a log with the various arguments for consumption by the beacon chain. It does not do validation, pushing the registration logic to the beacon chain.

This is reflected in the proposed Vyper PoW chain contract that does no checking.

At a minimum it would be good to check that msg.value == DEPOSIT_SIZE. This avoids at least a couple of issues (a sketch of the check, expressed in Python, follows the list below):

  • If someone mistakenly sends a different value to the contract, their funds will be unrecoverable under the current scheme.
  • I could call deposit() 16384 times with 0 value and junk data, and yet start the Beacon Chain without a single genuine, committed validator.
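
Expressed in Python terms (the real change would live in the Vyper contract above; DEPOSIT_SIZE being denominated in ETH is an assumption here):

DEPOSIT_SIZE = 32                        # ETH (assumed)
DEPOSIT_SIZE_WEI = DEPOSIT_SIZE * 10**18

def check_deposit(msg_value_wei: int) -> None:
    # Reject anything other than an exact DEPOSIT_SIZE deposit, so funds cannot be
    # stranded and junk deposits cannot count toward the ChainStart threshold.
    assert msg_value_wei == DEPOSIT_SIZE_WEI, "deposit must be exactly DEPOSIT_SIZE"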

validator rotation halting

The conditions for validator rotation say that all shards need to have a crosslink from a slot beyond validator_change_slot, which means an attacker could potentially bribe the validators of one shard to delay the rotation by not crosslinking. Especially in the beginning, not all shards are assigned to a committee per CYCLE_LENGTH. In that case, an attacker only needs to bribe MIN_COMMITTEE_SIZE * 2/3 validators to perform this kind of bribing attack. Should we loosen the validator rotation condition to 2/3 crosslinks?
