ethereum / beacon_chain
License: MIT License
The condition to check for an epoch transition described in the 2.1 spec seems a bit odd to me:
The epoch transition, which happens only if slot_number >= (epoch_number + 1) // EPOCH_LENGTH
Do you set the slot number back to 0 after an epoch transition? If that's the case, then every slot is an epoch transition:
slot_number = 0
epoch_number = 0
for i in range(0, 1000):
    if slot_number >= (epoch_number + 1) // 64:
        print("epoch transition!")
        print(epoch_number)
        print(slot_number)
        epoch_number = epoch_number + 1
        slot_number = 0
    else:
        print("not an epoch transition!")
        print(epoch_number)
        print(slot_number)
        slot_number = slot_number + 1
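For comparison, here is a sketch (my assumption, not the spec's wording) of what the condition presumably intends with `*` instead of `//`; with multiplication against an absolute slot target, the transition fires exactly once every EPOCH_LENGTH slots:

```python
# Assumption: the spec means an absolute slot target, i.e.
# slot_number >= (epoch_number + 1) * EPOCH_LENGTH (note `*`, not `//`).
EPOCH_LENGTH = 64

def simulate(num_slots):
    """Return the slots at which an epoch transition fires."""
    epoch_number = 0
    transitions = []
    for slot_number in range(num_slots):
        if slot_number >= (epoch_number + 1) * EPOCH_LENGTH:
            epoch_number += 1
            transitions.append(slot_number)
    return transitions

print(simulate(256))  # [64, 128, 192]
```

With `*`, the slot number never needs to be reset to 0; the epoch counter simply advances when the boundary is crossed.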
For the following function, what is s in range(s, s+count)? And what are we returning? Is it just a list of numbered shards that get to crosslink in the next epoch?
def get_crosslink_shards(crystallized_state):
    start_from = crystallized_state.next_shard
    count = len(crystallized_state.active_validators) // NOTARIES_PER_CROSSLINK
    if start_from + count <= SHARD_COUNT:
        return list(range(s, s+count))
    else:
        return list(range(start_from, SHARD_COUNT)) + list(range(start_from + count - SHARD_COUNT))
Genesis state is currently only created in conftest, so it is not immediately clear to people surveying the repo.
Is there a write-up or a spec on how the shuffling algorithm works? I checked https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ?view and still have trouble understanding this part of the algorithm:
for pos in range(0, 30, 3):
    m = int.from_bytes(source[pos:pos+3], 'big')
    remaining = validator_count - i
    if remaining == 0:
        break
    if validator_count < rand_max:
        replacement_pos = (m % remaining) + i
        o[i], o[replacement_pos] = o[replacement_pos], o[i]
        i += 1
The v2.1 spec now handles overflow when assigning shard_id to a ShardAndCommittee in get_new_shuffling. The current codebase needs to reflect this:

shard_id_start = crosslinking_start_shard + \
    i * committees_per_slot // slots_per_committee
o.append([ShardAndCommittee(
    shard_id = (shard_id_start + j) % SHARD_COUNT,
    committee = indices
) for j, indices in enumerate(shard_indices)])

i.e., use a shard_id_start temp var and % SHARD_COUNT to handle overflow (see spec). Both would work. Just curious.
bitfield[index // 8] & (128 >> (index % 8)) and bitfield[index // 8] ^= 128 >> (index % 8) are repeated, and (attester_count + 7) // 8 is repeated. We can refactor them to be more readable:

- Rename signer_bitmask to notary_bitfield, and bitmask to bitfield in general.
- Add get_vote(bitfield, index):

def get_vote(bitfield, index):
    return bool(bitfield[index // 8] & (128 >> (index % 8)))

- Add set_vote(bitfield, index) (better to change it to an immutable style):

def set_vote(bitfield, index):
    bitfield[index // 8] ^= (128 >> (index % 8))

- Add compute_bitfield_length(attester_count):

def compute_bitfield_length(attester_count):
    return (attester_count + 7) // 8
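A quick sanity check of the helpers sketched above (my corrected versions; the names follow this issue, not necessarily the final codebase):

```python
def compute_bitfield_length(attester_count):
    # One bit per attester, rounded up to whole bytes.
    return (attester_count + 7) // 8

def get_vote(bitfield, index):
    # Bit 0 of the field is the most significant bit of byte 0.
    return bool(bitfield[index // 8] & (128 >> (index % 8)))

def set_vote(bitfield, index):
    # Mutates in place; an immutable variant would return a new bytes object.
    bitfield[index // 8] ^= 128 >> (index % 8)

bitfield = bytearray(compute_bitfield_length(10))  # 10 attesters -> 2 bytes
set_vote(bitfield, 9)
print(get_vote(bitfield, 9), get_vote(bitfield, 0))  # True False
```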
Is this line sane? From what I can gather, aggregate_sig is going to be an array of length 2 because it's storing the x, y of a single BLS aggregate sig. Therefore num_aggregate_sig() will not reflect the number of individual signatures applied to the aggregate sig.
I understand this is a WIP PoC. I'm mainly asking to ensure my understanding is valid and I'm not missing something. I'm guessing someone just threw this in without really paying attention to it. I would just ignore it if there wasn't also a test asserting the functionality is correct.
Thanks
v2.1 adds slot to CrosslinkRecord to aid in the validator reward accounting.

- Add 'slot': 'int64' to the CrosslinkRecord fields
- Initialize slot to 0 for all initial CrosslinkRecords in CrystallizedState
- During a state recalculation, assign crosslink_record.slot = block.slot_number for the updated CrosslinkRecord
beacon_chain/beacon_chain/state/state_transition.py
Lines 74 to 87 in 8192419
I'm considering the (very unlikely) scenario when num_active_validators <= attester_count. In such a scenario, len(attesters) == num_active_validators, and a proposer will need to also be an attester.
A similar scenario is encountered as skip_count starts to increment.
I envisage this logic to remedy:
if len(active_validators) < 2:
    raise Exception()
validator_requirement = config['attester_count'] + skip_count + 1
if validator_requirement > len(active_validators):
    attesters = indices[:-1]
else:
    attesters = indices[:config['attester_count'] + skip_count]
proposer = indices[-1]
Note: I'm assuming we don't want to put proposers in the attesters set because it doesn't really achieve anything (of course they attest to it, they made it) and probably just complicates things.
As always, keen to hear your thoughts 🙂
The v2.1 spec has two new helpers defined (get_indices_for_slot and get_block_hash). The current code was written before these were added.
The logic for crosslink notaries in the current implementation is lacking. It assumes we will be able to go through every shard every epoch. This is not actually the case, because there is to be a fixed number of notaries (NOTARIES_PER_CROSSLINK) per crosslink, but a constant number of shards and a variable number of validators.

- Add the NOTARIES_PER_CROSSLINK constant
- Add get_crosslink_shards to get the shards that should be crosslinked this epoch
- Add get_crosslink_notaries to get the validators that should notarize the given shard for this epoch
- Remove get_shard_attesters (superseded by get_crosslink_notaries and get_crosslink_shards)
Implementations of the helper functions are approximately the following:
def get_crosslink_shards(crystallized_state):
    start_from = crystallized_state.next_shard
    count = len(crystallized_state.active_validators) // NOTARIES_PER_CROSSLINK
    if start_from + count <= SHARD_COUNT:
        return list(range(start_from, start_from + count))
    else:
        return list(range(start_from, SHARD_COUNT)) + list(range(start_from + count - SHARD_COUNT))
def get_crosslink_notaries(crystallized_state, shard_id):
    crosslink_shards = get_crosslink_shards(crystallized_state)
    if shard_id not in crosslink_shards:
        return None
    num_notaries = len(crystallized_state.current_shuffling)
    start = num_notaries * crosslink_shards.index(shard_id) // len(crosslink_shards)
    end = num_notaries * (crosslink_shards.index(shard_id) + 1) // len(crosslink_shards)
    return crystallized_state.current_shuffling[start:end]
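To make the intent concrete, here is a toy run of the get_crosslink_shards logic, boiled down to plain arguments (SHARD_COUNT = 10 and NOTARIES_PER_CROSSLINK = 3 are illustrative, not the spec's values):

```python
SHARD_COUNT = 10              # illustrative, not the spec value
NOTARIES_PER_CROSSLINK = 3    # illustrative, not the spec value

def get_crosslink_shards(next_shard, num_active_validators):
    # Only as many shards as we can fully staff with notaries this epoch.
    start_from = next_shard
    count = num_active_validators // NOTARIES_PER_CROSSLINK
    if start_from + count <= SHARD_COUNT:
        return list(range(start_from, start_from + count))
    # Wrap around past the last shard back to shard 0.
    return list(range(start_from, SHARD_COUNT)) + \
        list(range(start_from + count - SHARD_COUNT))

# 24 validators staff 8 crosslinks; starting at shard 7 wraps around:
print(get_crosslink_shards(7, 24))  # [7, 8, 9, 0, 1, 2, 3, 4]
```

With few validators, only a prefix of the shards gets crosslinked each epoch, which is exactly why next_shard has to rotate.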
The v2.1 spec now has new fields (justified_slot and justified_block_hash) in Attestation, and some logic regarding these fields when processing attestations.

- Add justified_slot and justified_block_hash to Attestation
- Verify that the justified_slot and justified_block_hash given are in the chain and are equal to or earlier than the last_justified_slot in the crystallized state.
BLS signatures really slow down testing. We currently do not have any adversarial testing so we can just assume all signatures verify in the context of blockchain testing.
Sorry if this isn't the best place to write this, but I don't know where else to put it.
This bears some resemblance to what DFINITY is doing, but it differs a bit. From my understanding of DFINITY, some sample of N of M validators is selected to attest the validity of the state using BLS signatures. At first read, this is really similar to what this system is doing. However, in DFINITY the consensus mechanism is decoupled from the state history. This reduces confirmation times significantly.
Alternatively, the Kadena group has created a system using graph theory which enables PoW at scale. Since each chain is structured in a graph of uniform degree N, it takes N "hops" (degree in a graph) to get to any other chain. From the context of a single chain in their "chainweb", you must have your nearest neighbors' headers to create a new block. This means that all blocks are confirmed in a fixed N blocks, so if the degree is 3, it takes 3 blocks to confirm the entire network. This has the side effect of reducing block times significantly, because the hash power on each chain in the chainweb can be lowered significantly... really significantly.
I bring this up because this proposed implementation seems to have properties of both a bit, but also deviates in that it's a purely incentive-based model for validating shards.
Like DFINITY, there's a random validator selection for committees of attesters (per block I believe, right?). This means that validation is really super selective and quick, and all you need is some fixed plurality of the attesters to validate a block. That's dope! But you're using RANDAO instead of something like threshold relay (though they're somewhat similar in idea and purpose). Why not go for a system like DFINITY? Nothing jumps out at me which prevents it. Heck, the model bears a very similar resemblance: https://speakerdeck.com/stringlabs/hail-the-decentralized-cloud?slide=44
Like Kadena, there's a significant incentive for load to be distributed across the shard network, but unlike Kadena there's no difficulty to maintain, so it's really about processing shards and having a sufficient validator pool to do so. Yet we're still throttled in some way by the main chain, no? Or... is there a choke point? There might not be, I'm unsure. I am a bit curious what happens if validators stop participating suddenly, though, or ignore certain shards. Can they withhold against specific shards in this scheme? In Kadena, this stops the entire network, so everyone must try to balance their hashpower across the chainweb to keep the entire chainweb going. What about here? Can a shard be neglected?
As the title says, I can't see where block.parent_hash is checked in _compute_new_active_state() (convenience link).
I assume the check would look something like this:
assert block.parent_hash == blake(serialize(parent_block))
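A runnable sketch of that check; blake and serialize here are stand-ins for the repo's helpers (the real serialization is simpleserialize, not pickle):

```python
import hashlib
import pickle

def blake(data):
    # Stand-in: the repo truncates blake2b output to 32 bytes.
    return hashlib.blake2b(data).digest()[:32]

def serialize(block):
    # Stand-in for the repo's simpleserialize encoding.
    return pickle.dumps(block)

def validate_parent_hash(block, parent_block):
    assert block['parent_hash'] == blake(serialize(parent_block))

parent = {'slot_number': 1, 'parent_hash': b'\x00' * 32}
child = {'slot_number': 2, 'parent_hash': blake(serialize(parent))}
validate_parent_hash(child, parent)  # passes silently
```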
As always, keen to hear your thoughts
The spec uses a crosslink_seed and a reference to when it was last reset (crosslink_seed_last_reset) to implement an exponential backoff on how often the shuffling gets updated. The exponential backoff ensures that, if there has not been a crosslink for N epochs, the next committee will have ~N/2 epochs to process through and verify the ~N epochs' worth of history for that shard, so the amount of work per second that the committee nodes need to perform is bounded.
These are the general details pulled from the spec. Ensure that all details around shuffling and crosslink_seed are met.
- Add to CrystallizedState:

# Used to select the committees for each shard
'crosslink_seed': 'hash32',
# Last epoch the crosslink seed was reset
'crosslink_seed_last_reset': 'int64'

- Replace the computation of current_shuffling with the following:

temp_seed = blake(crosslink_seed + bytes([log2(current_epoch - crosslink_seed_last_reset)]))
crystallized_state.current_shuffling = get_shuffling(temp_seed, len(new_active_validators))

- When the seed is reset, set crosslink_seed = active_state.randao and crosslink_seed_last_reset = current_epoch
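A sketch of the backoff behavior (the function name committee_seed is mine, and the repo's blake is approximated with hashlib): the shuffling seed only changes when floor(log2(epochs since reset)) changes, so reshuffles become exponentially rarer.

```python
import hashlib
import math

def blake(data):
    # Approximation of the repo's blake helper (truncated blake2b).
    return hashlib.blake2b(data).digest()[:32]

def committee_seed(crosslink_seed, current_epoch, crosslink_seed_last_reset):
    epochs_since_reset = current_epoch - crosslink_seed_last_reset
    backoff = int(math.log2(epochs_since_reset))
    return blake(crosslink_seed + bytes([backoff]))

seed = b'\x00' * 32
# Epochs 2 and 3 since reset share floor(log2) == 1: same shuffling.
assert committee_seed(seed, 12, 10) == committee_seed(seed, 13, 10)
# Epoch 4 since reset bumps floor(log2) to 2: committees reshuffle.
assert committee_seed(seed, 12, 10) != committee_seed(seed, 14, 10)
```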
Should we prevent finalization "if the height of the most recent recorded crosslink for every shard is greater than or equal to crosslink_seed_last_reset" or just prevent a dynasty transition?
Each shard seems to have its own address space. Does it have its own spend space as well? For non-validators, where does their source of value reside? Main chain? Do individuals stake into a shard like we would stake into a plasma chain? How are transactions intended to be sent? To the shard, I assume, which has individual contracts, no? So if I was to send a transaction on this system, would I bind my application to a default shard? How can I balance load across multiple shards? Is it even necessary for contracts to have a separate chain? Can they all reside on main or beacon chain? Really the EVM just needs to read the code, ya? It can technically execute it under whatever context it wants. State needs sharding, not contracts.
It would be nice if we could run flake8 as a step in the CircleCI test run to ensure that we're not merging badly formatted code.
The repo currently doesn't handle the case in which a block's parent spans multiple state recalc points. For example, if the parent of slot 258 was at slot 65, then processing the block at 258 would trigger a state recalc for 128, 192, and 256. The mechanism to solve this is described in the spec:

Repeat while slot - last_state_recalc >= CYCLE_LENGTH:

i.e., add a while loop around the initialize_new_cycle part of compute_state_transition instead of the current if.
See here: https://github.com/ethereum/beacon_chain/blob/master/beacon_chain/state/state_transition.py#L333
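A minimal sketch of the fix (names assumed to mirror the spec, boiled down to just the boundary bookkeeping):

```python
CYCLE_LENGTH = 64

def recalc_points(last_state_recalc, block_slot):
    """Collect every recalc boundary the new block has crossed."""
    points = []
    # The spec's "repeat while" loop, replacing the current single `if`.
    while block_slot - last_state_recalc >= CYCLE_LENGTH:
        last_state_recalc += CYCLE_LENGTH
        points.append(last_state_recalc)
    return points

# Parent at slot 65 (last recalc at 64), new block at slot 258:
print(recalc_points(64, 258))  # [128, 192, 256]
```

Each iteration would run initialize_new_cycle once, so a block whose parent is several cycles old catches up in a single call to compute_state_transition.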
get_shuffling can't handle when sample is larger than validator_count; the algorithm goes into an infinite loop. This actually occurs in code when len(active_validators) is less than ATTESTER_COUNT and get_attesters_and_signer is called.

Stub: need to think more; probably do one of the following with sample.
This testing lang is up for discussion: https://notes.ethereum.org/6ozACHyKR9i86vfkt086Ow#.
EDIT: Proposed YAML format: https://notes.ethereum.org/s/r11GVSBuQ
This issue can be implemented either in conjunction with #13 or after it.
I'm wondering if you guys have coded the get_cutoffs algo to split up validators into groups at the start of every epoch? That should determine at what height they can make attestations and on what shard they can make crosslinks.
Between PoS, the sharding, beacons, etc., can we model block time estimates in the notes?
Is the main chain a throttle point for block times, too? I don't think it is, because you lock value from the main chain to the beacon, but it's not 100% confirmed from my reading. There is a main_chain_ref, so it could be. Not sure. Halp?
The implementation does not currently verify the "proposer" of the block as described in the following:

Verify that the slot % len(get_indices_for_slot(crystallized_state, active_state, slot-1)[0])'th attester in get_indices_for_slot(crystallized_state, active_state, slot-1)[0] is part of at least one of the AttestationRecord objects; this attester can be considered to be the proposer of the block.

process_block should validate the presence of the proposer in the block as described above.

How much power does a proposer on a particular shard actually have? If one of my validators was elected as a proposer on a shard that had my competitor's contract, could I propose an order of transactions which would minimize their profits but still be considered valid?
The reason I look at this differently than leader election on a single chain is that a big player (perhaps a state actor) could create a lot of validators and use them to manipulate the market if they by chance got proposer status even, say, 1% of the time across each shard they have an interest in manipulating (which could be a few hundred out of the entire network). On a single chain this is significantly less likely, though still very possible. I may be missing information or completely misunderstanding something fundamental though. Is this possible?
From the spec:
def get_attesters_and_proposer(crystallized_state, active_state, skip_count):
    attestation_count = min(len(crystallized_state.active_validators), ATTESTER_COUNT)
    indices = get_shuffling(active_state.randao, len(crystallized_state.active_validators))
    return indices[:attestation_count], indices[attestation_count + skip_count]
This fails if attestation_count + skip_count >= len(active_validators), which is possible either if skip_count is large or if len(active_validators) < ATTESTER_COUNT. I think one should be able to fix this by using (attestation_count + skip_count) % len(active_validators) as the proposer index (but it would mean that the proposer is also an attester; not sure if that's an issue).
The implementation in this repository is a little bit more complicated because of the sample parameter, resulting in an infinite loop in get_shuffling: at some point remaining will be zero, so the inner loop breaks, but the outer loop will still continue. The proposed fix here is to just get rid of the sample complexity and yield each o[i] as soon as it's replaced.
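A sketch of the suggested modulo fix (my paraphrase; the constant and the stripped-down signature are assumptions):

```python
ATTESTER_COUNT = 32  # assumed config value

def get_attesters_and_proposer(shuffled_indices, skip_count):
    num_validators = len(shuffled_indices)
    attestation_count = min(num_validators, ATTESTER_COUNT)
    # Wrap instead of indexing out of range; the proposer may then also
    # be an attester, as noted above.
    proposer_index = (attestation_count + skip_count) % num_validators
    return shuffled_indices[:attestation_count], shuffled_indices[proposer_index]

indices = list(range(10))  # only 10 active validators (< ATTESTER_COUNT)
attesters, proposer = get_attesters_and_proposer(indices, skip_count=3)
print(proposer)            # 3: wraps instead of raising IndexError
```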
Finally, this check seems to be useless: https://github.com/ethereum/beacon_chain/blob/master/beacon_chain/state/state_transition.py#L59.
There are some state transition helper functions described in the spec. Reorganizing them together into helpers.py would help improve readability.
The current implementation tracks balance_deltas in ActiveState, but the spec instead tracks recent_attesters and recent_proposers to just process what they are owed at the end of each block. This is simpler and clearer.

- Remove balance_deltas from ActiveState
- Add 'recent_attesters': ['int24'] to ActiveState
- Add 'recent_proposers': [RecentProposerRecord] to ActiveState
- Add a RecentProposerRecord object with the following fields:

fields = {
    # Proposer index
    'index': 'int24',
    # New RANDAO commitment
    'randao_commitment': 'hash32',
    # Balance delta
    'balance_delta': 'int24'
}
Thanks to @jannikluhn for discussing it with me.
I was thinking about how to implement and test the fork choice rule in the Python PoC codebase; it would be nicer if we had a database to rebuild/rollback the state instead of just having some dicts. 😬
The first thing that occurred to me is AttestationRecord; the story, I think, is:

1. Validators create AttestationRecords and then broadcast them.
2. The proposer collects the AttestationRecord messages, creates a block, and broadcasts the Block.
3. The other validators receive the Block, apply the state transition, and update the local ActiveState. (Maybe also CrystallizedState, up to the slot number and other conditions in the spec.)
post_active_state.pending_attestations = prev_active_state.pending_attestations + block.attestations
How to store AttestationRecord and ActiveState? It makes sense to change active_state.pending_attestations: [AttestationRecord] to active_state.pending_attestation_hashes: [Hash32] if the AttestationRecords have already been stored in the local DB.
↑ This affects how we serialize ActiveState and how we generate active_state_root.
How to store a Block? For the DB storage design, we can consider treating the AttestationRecord like the PoW main chain Transaction to avoid double-storing from both Block and ActiveState. That said, the Block consists of BlockHeader and Attestations (i.e., BlockBody). There are these k-v pairs:

- attestation_hash -> AttestationRecord
- LOOKUP_BLOCK_HEADER_<block_hash> -> BlockHeader: the current block + potential proposer sig + attestation_root, minus the attestations field
- LOOKUP_ATTESTATION_HASHES_<block_hash> -> [Hash32]: get the attestation_hashes by block_hash
- active_state_root -> the new ActiveState (with active_state.pending_attestation_hashes: [Hash32]). When the crystallized state transition needs pending_attestations to update crosslinks, just query the AttestationRecords from the local DB.

↑ The clients can implement the DB scheme in various ways; it doesn't need protocol changes. But maybe we can optimize some structure in the protocol to make a great improvement?
Maybe BlockHeader is not a perfect name here...
Haven't dug into CrystallizedState yet.
The v2.1 spec changed the format of the attestation signature to:

hash(slot.to_bytes(8, 'big') + parent_hashes + shard_id + shard_block_hash)

i.e., replacing slot % CYCLE_LENGTH with slot.
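A sketch of the new message construction (the byte width for shard_id and the exact hash helper are my assumptions; check the spec for the real encoding):

```python
import hashlib

def blake(data):
    # Approximation of the repo's hash helper (truncated blake2b).
    return hashlib.blake2b(data).digest()[:32]

def attestation_message(slot, parent_hashes, shard_id, shard_block_hash):
    # Note: the full slot number is signed now, not slot % CYCLE_LENGTH.
    return blake(
        slot.to_bytes(8, 'big')
        + b''.join(parent_hashes)
        + shard_id.to_bytes(2, 'big')   # assumed width
        + shard_block_hash
    )

msg = attestation_message(1000, [b'\x11' * 32], 5, b'\x22' * 32)
print(len(msg))  # 32
```

Because the full slot is in the message, two attestations in different cycles can never collide the way `slot % CYCLE_LENGTH` allowed.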
According to the spec draft, for a block on the beacon chain to be processed by a node, three conditions have to be met:

1. The parent pointed to by parent_hash has already been processed and accepted.
2. The main chain block pointed to by main_chain_ref has already been processed and accepted.
3. The node's local clock time is greater than or equal to the minimum timestamp as computed by GENESIS_TIME + height * PER_HEIGHT_DELAY + total_skips * PER_SKIP_DELAY.

Conditions 1. and 2. can be handled as orphan blocks (the way we did it in pyethereum). For the beacon chain, we can also use a dict or bitfield to maintain a checklist to confirm those conditions and avoid redundant computation.
def is_ready_to_process(block):
    checklist = get_checklist(block)
    if checklist is None:
        checklist = create_checklist()
    # Condition 1: the parent pointed to by the parent_hash has already been
    # processed and accepted
    if check_status(checklist, CHECK_PARENT) and not block.get_parent().is_in_canonical_chain():
        return
    else:
        update_status(checklist, CHECK_PARENT)
    # Condition 2: the main chain block pointed to by the main_chain_ref has
    # already been processed and accepted
    if check_status(checklist, CHECK_MAIN_CHAIN) and not check(block.check_main_chain_ref()):
        return
    else:
        update_status(checklist, CHECK_MAIN_CHAIN)
    # Condition 3: the node's local clock time is greater than or equal to the
    # minimum timestamp as computed by
    # GENESIS_TIME + height * PER_HEIGHT_DELAY + total_skips * PER_SKIP_DELAY
    if check_status(checklist, CHECK_TIMESTAMP):
        timestamp_limit = GENESIS_TIME + height * PER_HEIGHT_DELAY + block.total_skip_count * PER_SKIP_DELAY
        if time.time() < timestamp_limit:
            return
        else:
            update_status(checklist, CHECK_TIMESTAMP)
    return True
Otherwise, call is_ready_to_process(block) again later.

Let's fork.
beacon_chain/chain/fork_choice_rule.py
In the spec draft pseudo-code, the beacon_chain object is just a list of hashes or blocks. IMO it may be less pain if we start from this very simple chain and keep it as simple as we can until we migrate to py-evm.
get_shuffling hangs if the validator_count is a power of 2 because of the following line:
https://github.com/ethereum/beacon_chain/blob/master/beacon_chain/state/state_transition.py#L48
I'm not actually sure what that line is supposed to be achieving.

Fix rand_max depending on if it is needed or not. As this line said:
The concept is described in the spec: https://notes.ethereum.org/s/Syj3QZSxm#Miscellaneous
The RecentProposerRecord part depends on #7.
We should have CI. I hear CircleCI is better than Travis.
Add the beacon_chain repo to CircleCI, configured to run the pytest tests.
The state roots validation is not part of the current spec or codebase. After the spec is revised, we can add the state roots validation here (L349).
It should be done outside the state transition function. There are some different processes: when the validator applies lookahead via get_new_shuffling, check if ze is the proposer or attester of a certain slot:

- if ze is an attester
- elif ze is the proposer and it is time to propose the block of the certain slot: compute_state_transition
- on receiving a new block: compute_state_transition

We can wrap the is_proposer... process and modify mock_validator_record now to get a more general API that can be used for both the real client and the tests.
beacon_chain/beacon_chain/state/state_transition.py
Lines 282 to 288 in a161844
I'm a bit confused about the logic here, especially lines 285 and 286.
Should 285 be if !has_voted(new_ffg_bitfield, index)?
1.) For all slots s in (epoch_number - 2) * EPOCH_LENGTH ... (epoch_number - 1) * EPOCH_LENGTH - 1. Is that correct? It seems odd:

# Assume we are in our 3rd epoch cycle
(3 - 2) * 64 = 64
(4 - 1) * 64 - 1 = 191
2.) Determine the total set of validators that voted for that block at least once. How do we determine that? By looking through that epoch's incoming blocks' attestations field? Do we just loop through the attester_bitfield and count the number of attesters who voted?

3.) Determine the total balance of these validators. How do we do that? How do we correlate the attester_bitfield to a validator_record? That's the only place to get the validator balance, right?
TL;DR: I think there are two bugs in the shuffling function. I suggest how to fix them. I also built a "sandbox" for testing shuffling functions.
I recently went on a deep-dive into shuffling functions to figure out why the beacon_chain one gets stuck in a loop. I have some findings which I will report here.
I believe there are two errors in the present implementation:
I'll give a quick intro to the Fisher-Yates shuffle and modulo bias for some background. I found it interesting, so you might too.
This is a method of shuffling a list "introduced" by Richard Durstenfeld in 1964 and "popularized" by Donald E. Knuth in The Art of Computer Programming. Apparently it's similar to work done by Ronald Fisher and Frank Yates in 1938. More @ wikipedia.
History aside, it looks like this:
-- To shuffle an array a of n elements (indices 0..n-1):
for i from 0 to n−2 do
    j ← random integer such that i ≤ j < n
    exchange a[i] and a[j]
It seems that this is the primitive for the shuffling algorithm used in the spec, so lets assume this is what we're going to use. Seems optimal.
For background, modulo bias is introduced when using modulo to "fit" a random integer of a large range into a smaller range, when the smaller range is not a clean multiple of the larger range. In other words, large_range % small_range > 0.
As an example, consider a function rng() which generates an integer between 0..3 (inclusive) and an application which requires an integer of 0..2 (inclusive). A developer might simply call rng() % 3; however, this introduces a bias, because both 0 % 3 = 0 and 3 % 3 = 0. This skews the post-modulo result in favor of 0. This is a Bad Thing™.
A method to fix this is to simply filter out all "biased" results, that is, any results which are greater than RAND_MAX - (RAND_MAX % n), where RAND_MAX defines the upper bound of an rng (0..RAND_MAX), n is the upper bound of the desired range (0..n), and RAND_MAX > n. Let's apply this knowledge to the previous example and define the following function:
RAND_MAX = 3  # rng() returns integers in 0..RAND_MAX (inclusive)

def filtered_rng(n):
    while True:
        x = rng()
        if x < RAND_MAX - RAND_MAX % n:
            break
    return x
The developer can now call filtered_rng(3). This will give them results without a modulo bias. The downside is that it's now possible for the function to run indeterminately, because it's relying on rng not to generate certain numbers. We can just assume that this won't actually run forever; instead it'll just slow us down every now and then.
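The bias is easy to see by enumerating every rng outcome exactly once (a toy illustration, not repo code):

```python
from collections import Counter

# rng() is uniform over 0..3; map into 0..2 by naive modulo:
naive = Counter(x % 3 for x in range(4))
print(naive)     # 0 occurs twice as often as 1 or 2

# Filter out the biased tail (x >= 3) before taking the modulo:
filtered = Counter(x % 3 for x in range(4) if x < 3)
print(filtered)  # each outcome exactly once
```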
I propose the following code to fix these issues (the unused config var stays for simplicity):
def shuffle(lst,
            seed,
            config=DEFAULT_CONFIG):
    lst_count = len(lst)
    assert lst_count <= 16777216
    o = [x for x in lst]
    source = seed
    i = 0
    while i < (len(lst) - 1):
        source = blake(source)
        for pos in range(0, 30, 3):
            remaining = lst_count - i
            if remaining == 1:
                break
            m = int.from_bytes(source[pos:pos+3], 'big')
            rand_max = 16777216 - 16777216 % remaining
            if m < rand_max:
                replacement_pos = (m % remaining) + i
                o[i], o[replacement_pos] = o[replacement_pos], o[i]
                i += 1
    return o
You can find this code here in a "shuffling sandbox" I made to test the conclusions I have come to.
First, to address error 1 (modulo bias), the rand_max calculation has been moved to where it can dynamically filter the "rng" results on each call to the "rng". If you look at the shuffling pseudo-code (above) you can see that the rng is required to be within dynamic bounds i ≤ j < n; therefore the "modulo-biased" rng results need to be filtered specific to these bounds, not as per the overall list (as in the current implementation).
Secondly, to address error 2 (unnecessary iteration), the while loop has been changed to do one less iteration. The reason this last iteration is unnecessary is that we're attempting to swap the last element in the list, n-1, with any other integer greater than or equal to n-1 but less than n. The only possible outcome here is for the last element to be swapped with itself, which is meaningless.
During the process of this, I made another shuffling implementation here. It is slower than the other implementations (due to abstraction overhead in Python), however it separates the logic into "shuffler" and "rng" components. This is useful in gaining an understanding of the fundamental structures involved.
There are a couple of potential speed-ups I thought of whilst doing this, will test them at some point.
Only ceil(log2(lst_count - i)) bits are needed per swap. This should reduce the number of hashes at the cost of doing a little extra math.

Point (2) is not very important; it would not change the outcome of the shuffle. Worth noting for client devs. However, point (1) would change the results of the shuffling function and would therefore need to be included in the spec. I'll come back to this once I have some benchmarks on speed increases.
As always, happy for feedback. Sorry about the essay.
Sorry to bother y'all again. One last question: the new crystallized state is not tracking the queued and exited validator sets. How do we go through queued validators and induct them to be active, and move low-balance active validators to the exit set?
beacon_chain/beacon_chain/state/state_transition.py
Lines 59 to 62 in 8192419
I'm going through the get_shuffling() function and I don't quite understand the logic here regarding rand_max. As I understand it, the program presently says that if validator_count > (max_validators / 2) then we should not shuffle and go into an infinite loop.
I assume the infinite loop thing is an extra indentation where there shouldn't be one on line 62. However, I don't understand the "don't shuffle if validator_count < rand_max" logic (this condition is true whenever validator_count <= (max_validators / 2)).
Thanks! 🙂
blake2b is 1.5 times faster than blake2s on 64-bit CPUs.

Question regarding assert len(lst) <= 16777216 in the shuffle function: the max validator count is 2^22, but 16777216 is 2^24. Is that right?
I am looking to understand the meaning behind this line. I presently see two options:
I ask because option (1) is relevant to other clients, whereas (2) is not so much.
The primary question derived from this is: "can we shorten bitfields to reduce state size?"
Thanks
The beacon chain design requires a contract on the main chain for one-way deposits and validator config initialization.
This contract should be written in Vyper, and should probably live in /contracts and be tested with eth-tester/web3.py.

- deposit function with the following arguments
- deposit is a payable function: assert msg.value == 32 ETH
- deposit function prevents multiple deposits for the same pubkey
- deposit emits a log with all params

Current coverage:
Name Stmts Miss Cover Missing
------------------------------------------------------------------------------
beacon_chain/__init__.py 0 0 100%
beacon_chain/state/active_state.py 8 0 100%
beacon_chain/state/aggregate_vote.py 7 0 100%
beacon_chain/state/block.py 22 1 95% 60
beacon_chain/state/crosslink_record.py 7 0 100%
beacon_chain/state/crystallized_state.py 9 0 100%
beacon_chain/state/partial_crosslink_record.py 7 0 100%
beacon_chain/state/state_transition.py 204 14 93% 47, 180, 191, 193, 201-204, 293, 301-305
beacon_chain/state/validator_record.py 7 0 100%
beacon_chain/utils/blake.py 6 2 67% 3-4
beacon_chain/utils/bls.py 74 0 100%
beacon_chain/utils/simpleserialize.py 80 12 85% 22, 56, 67-68, 87-95
------------------------------------------------------------------------------
TOTAL 431 29 93%
The numbers look good, but we can compare to the spec and find if there are any edge cases not included…?

- state.state_transition.py
- state.block.py
- simpleserialize.py
beacon_chain/tests/state/test_full_pos.py
Line 33 in dbc72f2
...
Can someone explain to me what the following code does in the get_new_shuffling function? I have a general idea, but want to clarify: what is slots_per_committee vs committees_per_slot? Also, what are we getting with int(len(avs) // EPOCH_LENGTH // (MIN_COMMITTEE_SIZE * 2)) + 1?
if len(avs) >= EPOCH_LENGTH * MIN_COMMITTEE_SIZE:
    committees_per_slot = int(len(avs) // EPOCH_LENGTH // (MIN_COMMITTEE_SIZE * 2)) + 1
    slots_per_committee = 1
else:
    committees_per_slot = 1
    slots_per_committee = 1
    while len(avs) * slots_per_committee < EPOCH_LENGTH * MIN_COMMITTEE_SIZE and slots_per_committee < EPOCH_LENGTH:
        slots_per_committee *= 2
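Plugging in numbers helps (a sketch; EPOCH_LENGTH = 64 and MIN_COMMITTEE_SIZE = 128 are assumed constants from the spec draft, and avs is the active validator set):

```python
EPOCH_LENGTH = 64         # assumed spec constant
MIN_COMMITTEE_SIZE = 128  # assumed spec constant

def committee_params(num_validators):
    if num_validators >= EPOCH_LENGTH * MIN_COMMITTEE_SIZE:
        # Enough validators: several committees attest in every slot.
        committees_per_slot = num_validators // EPOCH_LENGTH // (MIN_COMMITTEE_SIZE * 2) + 1
        slots_per_committee = 1
    else:
        # Too few validators: one committee is reused across multiple slots,
        # doubling its span until it covers a full minimum-size workload.
        committees_per_slot = 1
        slots_per_committee = 1
        while (num_validators * slots_per_committee < EPOCH_LENGTH * MIN_COMMITTEE_SIZE
               and slots_per_committee < EPOCH_LENGTH):
            slots_per_committee *= 2
    return committees_per_slot, slots_per_committee

print(committee_params(32768))  # (3, 1): 3 committees per slot
print(committee_params(1024))   # (1, 8): one committee spans 8 slots
```

So the `int(len(avs) // EPOCH_LENGTH // (MIN_COMMITTEE_SIZE * 2)) + 1` expression is just "how many committees of roughly twice the minimum size fit into each slot, at least one".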