ethereum / beacon-apis
Collection of RESTful APIs provided by Ethereum Beacon nodes
Home Page: https://ethereum.github.io/beacon-APIs/
License: Creative Commons Zero v1.0 Universal
It is desirable in many contexts to support streaming as a first-class citizen in this API. Streaming/event-type functionality has already proven useful in eth1 and is currently being used by consumers of both the Prysm API and Lighthouse websockets.
[Actual protocol aside] The two methods discussed for supporting this in the API are specific stream endpoints or a single event stream.
With the first, a client subscribes to /stream versions of a resource endpoint -- e.g. /resources/stream. With the second, a client listening to the single event stream might receive

EVENT new_block 0xffffff

and then call /beacon/chain/block/0xffffff to retrieve the alerted-about block.

Please take a look at the proposed API endpoints (sheet "[WG] Proposal"):
https://docs.google.com/spreadsheets/d/1kVIx6GvzVLwNYbcd-Fj8YUlPf4qGrWUlS35uaTnIAVg/edit#gid=1802603696
Feel free to comment here or in the sheet if you have questions. I would also like you to check out the "Outstanding questions" section.
API endpoints that aren't being contested will be opened here in the form of PRs, where further details like descriptions, validation, response codes etc. will be discussed.
cc-ing interested parties @paulhauner @hwwhww @arnetheduck @mratsim @terencechain @prestonvanloon @AgeManning @wemeetagain @mkalinin @ajsutton @rolfyone @moles1 @skmgoldin
Hi there :) I'm looking for some clarity on this line, please:
Specifically, what the list of "current head slot blocks" contains. I see these possibilities:
Either (1) or (3) seems to make sense to me, and I'd lean towards (1) since (3) is already covered (indirectly) by debug/beacon/heads.
Thus far, this repo only consists of "validator" APIs. To aid application developers, we aim to agree upon a set of user-level "beacon node" APIs.
Currently, Prysmatic has a number of user-level APIs defined in prysmaticlabs/ethereumapis with some decent traction with block explorers and other applications, and Lighthouse has a number of APIs that have also begun to be used by various applications (link?).
I propose that we make a list of the obvious items first (and especially any that have overlap between prysmatic and lighthouse). We can list them out, get a feel for the general categories, structure of arguments, etc and move from there. Explicit notes on how/why users are using them today will be helpful in better understanding the API in general.
After that, we can debate anything that seems "non-obvious" or maybe client-specific.
Had a bit of a discussion around the failure state of having no genesis time to report on.
It seems like this is more of an expected result than a failure result - pre-genesis the value won't be set.
Returning a 204 (No Content) instead would potentially be more appropriate, and it allows the API to still be queried without generating errors in this case.
In Teku we're following that theory currently, just because it seems to make sense.
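As a minimal sketch of the behaviour described above (the handler shape, endpoint name, and response body here are illustrative assumptions, not from the spec):

```python
def get_genesis(genesis_time=None):
    """Hypothetical genesis handler: returns (status_code, body).

    Pre-genesis the value simply isn't set yet, so this treats it as an
    expected empty result (204 No Content) rather than an error.
    """
    if genesis_time is None:
        return 204, None  # expected pre-genesis state, not a failure
    return 200, {"genesis_time": str(genesis_time)}

# Pre-genesis: still queryable, no error generated.
assert get_genesis() == (204, None)
# Post-genesis: a normal 200 with the value (uint64 as a string).
assert get_genesis(1606824023) == (200, {"genesis_time": "1606824023"})
```

A 204 keeps pre-genesis polling loops free of error handling for what is really a normal state.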
Types are described as here https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.3.md#data-types
Some schemas such as Root / Hex / Signatures have an invalid format such as hex (non-existent) or bytes (I guess a typo that should be byte); see https://github.com/ethereum/eth2.0-APIs/blob/master/types/primitive.yaml#L33
Other malformed schemas may exist in the repository.
Validators may ask the beacon node to subscribe to an attestation subnet for three reasons:
Currently the /eth/v1/validator/beacon_committee_subscriptions API can handle the first two, but there doesn't appear to be any way to signal that a subscription is persistent and should be reported via MetaData and ENR. It's also awkward to have to try and reverse engineer a committee ID out of the subnet ID the validator randomly selects (and it may not be possible, given that the committee ID to subnet ID mapping requires the number of active validators).
My suggestion would be that the request body is changed to something like:
[
{
"subnet_index": uin64,
"committee_index": uint64,
"slot": uint64,
"subscription_type": ("attestation" | "aggregate" | "persistent")
}
]
One of subnet_index or committee_index would be required. In all cases, the node should ensure peers are available on the subnet. When subscription_type is aggregate or persistent, the node must subscribe to the gossip topic for the subnet. When subscription_type is persistent, the node should advertise the subnet subscription in its MetaData and ENR.
Most likely validator clients would always provide a subnet_index for persistent subscriptions and committee_index for attestation and aggregate subscriptions, but I don't see any reason not to allow either if the validator client is somehow able to determine it.
The committees_at_slot value is dropped, as it doesn't appear to be needed for anything and calculating it requires the number of active validators.
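A sketch of the validation these rules imply, assuming the field names from the suggested request body (and reading "one of ... would be required" as at-least-one; neither choice is settled above):

```python
VALID_TYPES = ("attestation", "aggregate", "persistent")

def validate_subscription(sub):
    """Return an error string for a bad entry, or None if it looks valid.

    Sketch only -- field names follow the proposed request body above,
    not any finalized spec.
    """
    if "subnet_index" not in sub and "committee_index" not in sub:
        return "one of subnet_index or committee_index is required"
    if sub.get("subscription_type") not in VALID_TYPES:
        return "subscription_type must be attestation, aggregate or persistent"
    return None

# A persistent subscription identified by subnet alone is fine.
assert validate_subscription(
    {"subnet_index": 3, "slot": 100, "subscription_type": "persistent"}) is None
# Missing both identifiers is rejected.
assert validate_subscription(
    {"slot": 100, "subscription_type": "aggregate"}) is not None
# Unknown subscription types are rejected.
assert validate_subscription(
    {"committee_index": 1, "slot": 100, "subscription_type": "proposer"}) is not None
```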
As brought up by @skmgoldin on that last eth2 call, there is a strong desire from application developers for eth2 clients to provide a standard/conformant API.
We have this repo, which is intended to be a collection of RESTful HTTP APIs for various domains on the eth2 client. The APIs found in this repo are currently written in OpenAPI 3.0 in YAML.
So far, this repo only defines a beacon<->validator API for the purposes of writing generic "validator clients", but the intention is to host many other user-level APIs for the beacon chain (phase 0) and shard chains (phase 1). To date, the definition of this API has been largely collaborative across clients and the promise of standard/interoperable VCs seems in reach.
Prysmatic has a number of user-level APIs defined in prysmaticlabs/ethereumapis with some decent traction with block explorers and other applications, and Lighthouse has a number of APIs that have also begun to be used by various applications (link?).
There are a few things we need to work through:
We'll tackle [1] here and address the other two in separate issues [#25, #26]
This has been addressed in a number of past conversations and the general consensus was to land on the OpenAPI format.
@paulhauner put together an argument for continuing to write the APIs in OpenAPI, and it seems the general consensus is still to do so.
The alternatives are to define the API (1) in protobufs or (2) in SSZ and provide standard conversion between all three. Some arguments for/against are made here and were discussed briefly on the last eth2 call.
If we choose to go the OpenAPI-native route, we should investigate the types currently used in this repo to ensure that they are sound for our purposes and easily converted in alternative contexts.
Types to discuss:
0x00-style hex-strings in APIs, which is why this repo originally used that format
uint64 as a string type (instead of a native JSON integer), because some applications like Postman and Swagger round down integer values greater than int53

We also need to figure out a versioning scheme for the API.
Action items:
At the bare minimum, we should have some "name" (for basic protocol segregation) and a version (e.g. Prysm currently uses /eth/{version}).
Questions:
/eth vs /eth2 -- any strong preference?
/eth2 is used in libp2p protocols, so /eth2 might be a valuable distinction between the consensus protocol and user-land (eth1). /eth is nice because the plan is for it all to be one Ethereum protocol sooner rather than later.
refs: Client/v0.0.0/githash/os/other/stuff from @tzapu; /eth/{version} from @prestonvanloon
"/eth prefix as to namespace eth from other services if multiple APIs are being load balanced from the same root URL"
Do we expect servers to serve multiple versions simultaneously? Likely. If so, a single endpoint that returns a list of versions should work. Does /eth/versions or /eth/supported_versions work for everyone? Or is there a standard practice we should conform to?
Two query parameters, slot and parent_root, are defined, and it's stipulated that if no parameters are specified, slot defaults to head...
what if both parameters are specified? is it invalid, an OR, or an AND condition?
Assuming it's an AND (or OR really), does slot default to head only if there's no parent_root specified?
Assuming it's not defaulting slot, and a user specifies parent_root only, then is the expectation that the result would be all canonical and non canonical blocks with this parent?
If the block is finalized and only the canonical chain is stored, then we might only get the block with the parent_root on the canonical chain, because some clients may not store blocks that didn't make the chain long term. is that adequate for this endpoint?
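To make the ambiguity concrete, here is one possible interpretation sketched out (AND semantics, with slot defaulting to head only when neither parameter is given; these are assumptions for illustration, not the spec):

```python
def filter_blocks(blocks, head_slot, slot=None, parent_root=None):
    """Hypothetical AND interpretation of the slot/parent_root filters."""
    if slot is None and parent_root is None:
        slot = head_slot  # default only when no filter at all is given
    return [b for b in blocks
            if (slot is None or b["slot"] == slot)
            and (parent_root is None or b["parent_root"] == parent_root)]

blocks = [
    {"slot": 10, "parent_root": "0xaa"},
    {"slot": 10, "parent_root": "0xbb"},  # a fork block at the same slot
    {"slot": 11, "parent_root": "0xcc"},
]
# No parameters: slot defaults to head.
assert filter_blocks(blocks, head_slot=11) == [{"slot": 11, "parent_root": "0xcc"}]
# parent_root only: slot is NOT defaulted, all blocks with that parent match.
assert filter_blocks(blocks, head_slot=11, parent_root="0xbb") == [
    {"slot": 10, "parent_root": "0xbb"}]
# Both given: AND condition.
assert filter_blocks(blocks, head_slot=11, slot=10, parent_root="0xaa") == [
    {"slot": 10, "parent_root": "0xaa"}]
```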
Looking at the getAggregatedAttestation definition, the HTTP status code 403 (Forbidden) is being used to represent an application-specific state (beacon node was not assigned to aggregate on that subnet).
In any deployment scenario in which the APIs are protected by an authorization system, it will be hard to differentiate whether a 403 is due to unauthorized access or because the beacon node isn't subscribed to a subnet.
I suggest that we use a less common status code to represent application-state based responses. Or maybe add some extra info on the response so clients can differentiate them.
Current status:
Ideas:
/eth/v1/validator/aggregate_attestation may not be able to create an aggregate if the beacon node doesn't have any attestations with the specified attestation data root. In that case, currently both Teku and Lighthouse return a 404 response code, but it's not actually listed as one of the possible response codes in the OpenAPI docs.
Also, while 404 isn't an unreasonable option, it has led to a bunch of confusion about whether the URL being requested was wrong (typically 4xx response codes mean the caller did something wrong). In this case the call was valid but there was nothing to return. I wonder if 204 No Content would be a better response code.
In any case, the OpenAPI docs should be updated to explicitly list what response code is used when no known attestations matching that data root are found.
Even though the JSON definition doesn't limit int size, native integers will cause trouble in tools like Postman and Swagger, where values greater than int53 will be rounded down. If we define it as a string, JSON parsers won't wrongly assume native types in different languages.
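A small illustration of the trade-off (Python ints are arbitrary precision, so the rounding that double-based tools perform is simulated here with float):

```python
import json

# A uint64 encoded as a decimal string survives any JSON round-trip exactly.
slot = 2**63 + 1  # not representable as an IEEE-754 double
as_string = json.dumps({"slot": str(slot)})
assert json.loads(as_string)["slot"] == str(slot)

# Simulating what a double-based parser (Postman, Swagger, JS clients)
# does to the native-number encoding: the value is silently rounded.
assert float(slot) != slot
```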
I would propose a new validator API, as the current API is causing trouble with this aggregation strategy.
Problems are:
duties for the current epoch to check if we are the proposer, and for a future epoch so we can subscribe to attestations (the call for proposer checking will subscribe us (the beacon node) to unnecessary topics)

I would propose the following:
/eth/validator/duties/proposer/{epoch} - returns a {slot: publicKey} mapping for every slot in the given epoch
/eth/validator/duties/attester/{epoch}?publicKeys=public1,public2 - returns [{publicKey, slot, committeeIndex}] for the given publicKeys and subscribes to topics related to the returned committeeIndices
/eth/validator/attestations/{committeeIndex} - returns all attestations collected from that committee index topic
/eth/validator/aggregate
Other APIs will remain the same (the attestation type needs a change).
Once we have agreed on the API, I'll open a PR on the repo.
@GregTheGreek commented on Wed Oct 16 2019
This issue details the JSON RPC API that is needed on the Eth2 EE in order to support an Eth1 execution environment. This will not be relevant until Phase 2, but discussions should begin now.
Migrating contract state to Eth2.0 is an essential step in the roadmap (not covered in this document); this requires backwards compatibility for existing developer tooling such as ethers.js and web3.js. Otherwise all DApps currently utilizing Eth1 contracts would not be usable (i.e. the web-apps would quite literally blow up).
The specification for Eth1 can be found here, and should be implemented 1-1. Naturally, this should be fairly trivial since most languages already have an existing Eth1 implementation with a fully functional JSON RPC.
It should be noted that there is support for pub/sub (web sockets); although most providers and DApps should be checking for fallbacks, we should implement both the JSON-RPC schema and pub/sub.
Sorry if this is already in the proposals; I was not able to identify it.
I think there is a need for an endpoint that returns all validator duties for the next epoch.
Basically this https://pegasyseng.github.io/teku/#operation/postValidatorDuties without the pubKey filter, or https://api.prylabs.net/#/BeaconChain/ListValidatorAssignments
I have seen /v1/validator/duties/attester/{epoch} and /v1/validator/duties/proposer/{epoch}, but one is filtered by pub key and one isn't, which feels a bit inconsistent, and they won't give you the full picture anyway, unless both are allowed to be called without a pubKey.
This would be useful for indexers of on-chain data, to get responsibilities before they happen and update/alert on them as they happen or are missed.
While we are actively iterating on and thinking about the API, it would be worthwhile to maintain a list of early API consumers. Please post them in the comments below and I'll update the list here.
Using the openapi-generator go module with this spec produces numbered (not named!) response object types. This leads to two difficulties for me:
It would be nicer if, for example, the GetBlockAttestations operation had a response type like GetBlockAttestationsResponse instead of InlineResponse20011. I know it is possible to generate named response types; the OpenAPI spec that Teku generates from its own codebase produces named responses, for instance.
By the way, slight shill: eth2-comply is now ready to use. 🙂
@mpetrunic @decanus
new build failed on master...
I can look into it in a bit, but if either of you have a quick fix, go for it
I'm planning to extract all types to a separate file/files so they can be reused across files and are easier to maintain.
Do we want to generate types from SSZ (or something else) or write them manually?
Pro:
Con:
Pro:
Con:
Should skipped slots return a 404 from the /eth/v1/beacon/blocks/{block_id} endpoint? In Lighthouse we currently return information about the most recent non-skipped slot. I don't see this explicitly in the spec.
Relevant lighthouse issue: sigp/lighthouse#2186
When splitting the validator client out from Teku, questions have been raised about why the API would split out validator duties retrieval for attestations and proposals, because from the code perspective both need to be called to figure out what the validator needs to do.
Would there be value in combining /eth/v1/validator/duties/proposer/{epoch} and /eth/v1/validator/duties/attester/{epoch} into something like /eth/v1/validator/duties/{epoch}?
It seems desirable for standard conversions to exist between formats (YAML, protobufs, SSZ, etc.), regardless of whichever format this API repo uses to spec things.
Based on conversations from the recent eth2 call, I propose the following.
If the above investigations prove fruitful, then (following @paulhauner's suggestion) we should integrate auto-conversions into the CI to ensure that the API remains usable across the formats we explicitly want to support.
Hey, is there any chance a GraphQL API could be standardized along with any REST or JSON RPC APIs being discussed?
The comment for epoch on this endpoint states: "Epoch for which to calculate committees. Defaults to beacon state epoch." This doesn't fit with our existing usage of endpoints, where none of the other path values are optional.
I can think of a couple of solutions. We could make two endpoints:
/eth/v1/beacon/states/{state_id}/committees, which returns the committees for the current epoch as per the state_id
/eth/v1/beacon/states/{state_id}/committees/{epoch}, which returns the committees for the specified epoch and state_id
If it is felt that this stretches the sub-resource definition past breaking point, then as an alternative we could go with just the first endpoint and change epoch to an optional query parameter.
[Note: this was previously discussed by a few of us, but resurfacing now to make sure everyone is on board]
As discussed on the API call today, we need to decide on integer representation in the API. Using native integers is not favorable due to the JS limitation of 53 bits. The natural choice becomes a string. The two competing ideas are a decimal representation or a hex representation with an 0x prefix.
IMO, the readability arguments for decimal are a bit more compelling than the eth1 conformance. Conformance with eth1 must be weighed against the fact that we are using little-endian where eth1 is big-endian, and the eth1 API is a separate JSON-RPC.
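For concreteness, the two candidate string encodings side by side (a sketch of the options under discussion, not a decided format):

```python
def encode_uint64(value, use_hex=False):
    """Sketch of the two candidate string encodings for uint64."""
    assert 0 <= value < 2**64
    return hex(value) if use_hex else str(value)

def decode_uint64(s):
    """Accept either form: 0x-prefixed hex, otherwise decimal."""
    return int(s, 16) if s.startswith("0x") else int(s, 10)

assert encode_uint64(12345) == "12345"                 # decimal: readable
assert encode_uint64(12345, use_hex=True) == "0x3039"  # hex: eth1-style
assert decode_uint64("0x3039") == decode_uint64("12345") == 12345
```

Either way, both encodings round-trip exactly for the full uint64 range, which native JSON numbers do not.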
I think the validator_balances should be able to result in a 404 for the same reasons as the regular validators getter. Related PR: #111 missed this I think.
Also, does an out-of-range validator index result in 404 or 400? I think 404 is better, but it's not clear right now why 400 is the only thing in the spec. This applies both to the validators and validator_balances endpoints.
Edit: and should it be returning an indexed error, to specify which parts of the query were bad?
This endpoint has a few interesting fields which would be good to clarify I think.
I noticed that there isn't a count endpoint. So for apps that just want the count, they have to obtain all the peer data?
For this specific endpoint, there is the address field, which the examples show as a multiaddr. Firstly, what exactly should this address be? Is it simply the multiaddr that can potentially be obtained from the ENR, in which case it's likely redundant information (and an ENR may not contain this information)?
Perhaps this is the "last seen address", i.e. the socket that we observe the connection on? If so, this could potentially have multiple values for implementations that allow multiple connections. Should this then be a comma-separated list?
Also, the example shows the TCP port, but most nodes will have UDP as well. I guess this is attempting to show just the libp2p listening address, or the last seen socket address on the TCP side?
Another point to clarify is the state and direction fields. In Lighthouse these are grouped into a single enum. If a peer is disconnected, it doesn't have an ingoing or outgoing state. Can this be nullable in this instance, or do we need to keep track of past connected states for disconnected peers?
In general, if in the future we allow multiple connections per peer, the peer could have both ingoing and outgoing connections. In this case which should be returned?
Because a large number of indexes might need to be passed, attester duties can become quite a long URL, exceeding the allowed size of a GET request.
To further complicate, it's dependent on the http service - so clients would not be confident in the size of request they can make before reaching failure.
Teku has an 8K header size limit, which allowed for about 1000 small indexes if unencoded, or significantly fewer if the reserved , is URI-encoded (random numbers below 100000). I haven't specifically tested other clients to determine their header size limits.
Updating this specific endpoint to a POST will allow us to push the indexes parameter into JSON in the body of the request, something like {"index": ["1","2"]}. This would then avoid the request size limits.
The loss of caching we would have with a GET is minimal, because we're requesting per epoch and for a specific list of validators, so it's very possible we'd not be getting effectively cached anyway (IMO).
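To make the size problem concrete, a rough sketch (the query-string shape and the 4K budget are illustrative, not from any client's actual configuration):

```python
import json

indices = [str(i) for i in range(1000)]

# GET: indices ride in the query string, where the URI-encoded comma
# (%2C) triples the separator cost and server header limits apply.
get_url = ("/eth/v1/validator/duties/attester/100?index="
           + "%2C".join(indices))

# POST: the same indices move into a JSON body, which servers do not
# cap at header-sized limits.
post_body = json.dumps({"index": indices})

assert len(get_url) > 4096  # already overflows a conservative 4K header budget
```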
We've recently found that in order to avoid a BeaconState read, we need to thread the get_committee_count_at_slot value through the validator duties and up into the subnet subscription.
The reason is that the allocation of an attestation to a subnet is fork-dependent. I.e., two attestations with the same index and slot, but different beacon_block_root, can be required to be published in two different subnets (only in rare circumstances). There's a more in-depth explanation here: sigp/lighthouse#1257 (comment).
I'd be interested in hearing people's thoughts on including this get_committee_count_at_slot value in:
/validator/duties/{epoch}/attester
/validator/beacon_committee_subscriptions
The reasoning is that it prevents the BN from needing to know the active validator count for attestation.data.beacon_block_root when processing a /validator/beacon_committee_subscriptions request.
On our end, we've called the field committee_count_at_slot, but we're open to change :)
TL;DR: Should we use loosely-defined markdown instead of Swagger?
We've been working to bring our Beacon Node REST API up to date to submit here; however, I get the feeling the Swagger format isn't right for an eth2.0-xxxx repo. It seems to me that this repo should contain easily-maintainable markdown docs.
Before we go and submit more Swagger YAML to this repo, I wanted to reach out and get feedback: should this repo use Swagger YAML or markdown to outline eth2.0 APIs?
I've listed some pros and cons for Swagger vs Markdown below. Keen to hear your thoughts.
Example: we have the Minimal Beacon Node API for Validator on SwaggerHub.
Working in Swagger YAML is not necessarily ergonomic. Consider this 200 response:
'200':
  description: Success response
  content:
    application/json:
      schema:
        type: object
        properties:
          epoch:
            type: integer
            format: uint64
            description: >-
              The epoch for which the returned active validator changes
              are provided.
          balances:
            type: array
            items:
              title: ValidatorBalances
              type: object
              properties:
                pubkey:
                  $ref: '#/components/schemas/pubkey'
                index:
                  type: integer
                  format: uint64
                  description: The global ValidatorIndex of the validator.
                balance:
                  type: integer
                  format: uint64
                  description: >-
                    The balance of the validator at the specified epoch,
                    expressed in Gwei
I suspect maintaining this will become frustrating, especially if it's a collaborative document.
You can modify this YAML file in your favourite IDE, however you need to copy-paste it into SwaggerHub to compile it and render the UI (AFAIK).
The YAML file itself is not accessible to developers who want to figure out how to consume the API, in the same way that markdown is auto-rendered by Github. In order to view the API, they need to either:
AFAIK, there's no way to get SwaggerHub to reference a YAML file in a Github repo at some commit. We'd need a CI service to update SwaggerHub or push to a self-hosted instance.
We'd also need some CI service to validate the Swagger file.
We probably don't want to do full-depth object specification (i.e., fully define a BeaconState
in Swagger), because then we need to be sure to keep it sync'ed with the spec. Doing this manually is laborious and (I think) likely to be neglected. We could write a script, but then we'd need to maintain that instead.
To avoid doing full-depth object specification we could stub-out complex objects, but then we'd lose some of the features of Swagger (e.g., code generation, click-y example requests, etc.)
Something like the JSON-RPC document (or a better alternative), but for HTTP.
(E.g., the API can reference a BeaconBlock, and you can take that to be whatever spec version you wish.) Add a version prefix to the routes for API stability and ease of upgradability.
Consumers of the API may also like to be able to inspect an endpoint for supported versions.
Suggestion brought up during the EthereumJS team retreat. @ricmoo
There is currently no way to query the current slot or epoch - only the previous justified, current justified, and finalized epoch.
It's possible to calculate this manually after querying the genesis time & seconds per slot, but it would be very convenient to have an endpoint for this data.
This seems like it could fit under /eth/v1/beacon/states/{state_id}/finality_checkpoints, but that might be a bit of a stretch. Not sure if there is another sensible endpoint it could live under, or if it should have its own.
In Eth1 apis, certain endpoints are practically unusable because the amount of data returned is unbounded and becomes too large for certain consumers.
If we have endpoints that return larger datasets, it would be good to consider how this can be broken up into responses of manageable/bounded size.
Suggestion brought up during EthereumJS team meeting. @ricmoo
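One common way to bound large responses is cursor-style pagination; here is a sketch, with parameter names (page_token, page_size) chosen for illustration rather than taken from this spec:

```python
def paginate(items, page_token="0", page_size=250):
    """Cursor-style pagination sketch so responses stay a bounded size.

    `page_token`/`page_size` are hypothetical parameter names; the point
    is only that no single response is unbounded.
    """
    start = int(page_token)
    page = items[start:start + page_size]
    has_more = start + page_size < len(items)
    next_token = str(start + page_size) if has_more else None
    return {"data": page, "next_page_token": next_token}

first = paginate(list(range(600)), page_size=250)
assert len(first["data"]) == 250 and first["next_page_token"] == "250"

last = paginate(list(range(600)), page_token="500", page_size=250)
assert len(last["data"]) == 100 and last["next_page_token"] is None
```

A consumer just loops until next_page_token is null, so even very large datasets arrive in manageable chunks.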
Sorry for the late flagging of this, I've just started actually moving some of the teku endpoints to serve new endpoints.
Syncing is a slightly more complicated domain than it probably appears on first glance.
A given node has a good idea of what slot it's at, and can pretty easily find what slot other peers are at, and based on trailing distances or some other metric decide it actually needs to sync data from a peer.
The current endpoint arguably over-simplifies the scope of the problem, by only having 2 fields:
head_slot: uint64,
sync_distance: uint64
Head slot describes 'head slot the node is trying to reach', and sync_distance is 0 if the node is in sync.
It's probably not uncommon for nodes to be in some kind of state where they're a little behind and just processing gossip to catch up on the state of play.
Lighthouse/Teku nodes had a syncing endpoint returning an explicit syncing flag, the head_slot of the best peer, the current_slot that the node is up to, and the start_slot (which I have never really needed, but is basically the first non-final slot from the node's perspective, so the node could potentially wind back to that point if it ends up syncing from a different peer).
The value in this is that it builds a picture of the state of play of the node you're querying: you are explicitly told whether the node is syncing via a boolean, and you have the slot the node is at and the slot of its best peer.
To best map to the current description, I would basically have to look at whether our node is syncing; if it isn't, immediately pass back our current slot and a sync_distance of 0 to say we're not syncing. If we are syncing, head_slot would be the head_slot of the best peer that I have, and sync_distance would be head_slot - current_slot.
Is this something we should consider adding more context to before going too far down the implementation route for this endpoint?
If we do stay with the current description, we should probably at the very least rename 'head_slot', due to that term currently being in use by ethstats; I think we actually want 'target_slot' or similar.
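The mapping described above, sketched out (this is my reading of the two-field response, not a normative definition):

```python
def syncing_response(current_slot, best_peer_slot, is_syncing):
    """Map Teku/Lighthouse-style sync state onto the two-field response.

    Sketch: if not syncing, report our own slot and distance 0;
    otherwise report the best peer's slot and the remaining distance.
    """
    if not is_syncing:
        return {"head_slot": str(current_slot), "sync_distance": "0"}
    return {"head_slot": str(best_peer_slot),
            "sync_distance": str(best_peer_slot - current_slot)}

# In sync: our own slot, distance 0.
assert syncing_response(100, 110, False) == {"head_slot": "100",
                                             "sync_distance": "0"}
# Syncing: best peer's slot, and how far behind we are.
assert syncing_response(100, 110, True) == {"head_slot": "110",
                                            "sync_distance": "10"}
```

The sketch makes the information loss visible: the caller cannot distinguish "in sync at slot 100" from "gossip-catching-up near slot 100", which is exactly the extra context being asked for.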
We should add a deprecation notice to the v1 versions of endpoints like block and state that are going to change in the Altair fork, so we can remove them in the next iteration. We should also explain that these endpoints return only pre-Altair blocks/slots.
One of the most important pieces of eth2 in phase 0 is access to data. We have collected service schema definitions that we find valuable to validator client operators, block explorers, and anyone interested in the data of eth2. We collected data API use cases in this products requirement document and received feedback from Danny, Jannik, Zak and many others.
We turned that feedback into a self-describing schema utilizing protocol buffers with the hope that we can start eth 2.0 with a consistent and easy to use API. The schema is defined in this repo: https://github.com/prysmaticlabs/ethereumapis
I'd love to hear more feedback on the products requirement document and the schema repo. I'd like to help migrate the ethereumapis repo to here.
How should we handle a failure on a single validator in a POST to the attester duties endpoint? The IndexedErrorMessage introduced in #102 won't quite work with this endpoint if it still needs to return successful duties for other validators in the batch.
Suggestion from @paulhauner: make all the non-pubkey fields optional and return null when we don’t know the validator
Another potential option: make the response array fixed-length and ordered based on the request array, and each element of the response array can be nullable
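The second option could look like this (a sketch; the duty fields shown are illustrative, not the finalized response schema):

```python
def attester_duties_response(requested_indices, known_duties):
    """Fixed-length, request-ordered response: each element is either the
    duty object or None (JSON null) for a validator the node doesn't know.
    """
    return [known_duties.get(i) for i in requested_indices]

known = {1: {"validator_index": 1, "slot": 32, "committee_index": 0}}
resp = attester_duties_response([1, 999], known)

assert len(resp) == 2                # same length and order as the request
assert resp[0]["slot"] == 32         # known validator: full duty returned
assert resp[1] is None               # unknown validator: element is null
```

Because position encodes identity, the batch still succeeds for the validators that are known, and the caller can tell exactly which entries failed without any separate indexed error structure.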
PR #136 by @rolfyone has a potential inefficiency if the validator does one sync committee period of lookahead and pushes a subscription. With until_epoch it will inform the node when to unsubscribe, but not when to subscribe.
I believe it would be simpler to replace until_epoch with period. With a single number the node knows exactly when that subscription is of interest, and can compute from_epoch and until_epoch itself.
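To illustrate, both bounds can be derived from a single period number (EPOCHS_PER_SYNC_COMMITTEE_PERIOD is 256 on mainnet; that constant is an assumption stated here, not taken from this thread):

```python
# Assumed mainnet value of the consensus-spec constant.
EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256

def period_bounds(period):
    """Derive (from_epoch, until_epoch) from a sync committee period.

    Sketch: the node learns both when the subscription becomes of
    interest and when it expires from the one number.
    """
    from_epoch = period * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
    until_epoch = (period + 1) * EPOCHS_PER_SYNC_COMMITTEE_PERIOD
    return from_epoch, until_epoch

assert period_bounds(0) == (0, 256)
assert period_bounds(3) == (768, 1024)
```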
Which API...I can't find it...thanks.
There has been some discussion in #96 regarding how to determine if some event on the BN will invalidate the cached duties in a VC. To quote @ajsutton (mainly because the contained link has some useful background):
The key reasoning behind getting an event for empty slots is to provide a guarantee from the beacon node that it will notify the Validator Client if a later chain reorg affects that slot. The vc may need to recalculate duties because of that switch from empty to filled block. https://hackmd.io/kutQ7smJRZ-sJNSuY1WWVA#Why-head-must-be-published-for-empty-slots sets out the scenario.
We've faced this issue several times in Lighthouse and I believe we have a concise solution, which I detail here.
Thanks to @ajsutton for letting me run this past him and giving feedback.
First, define two new terms which refer to the head block and state of a beacon chain:

proposer_shuffling_pivot_root: just like an attestation target_root, this is the block root at the start of the epoch which contains the head block (or the genesis root in the case of underflow): get_block_root_at_slot(state, compute_start_slot_at_epoch(get_current_epoch(state))).

attester_shuffling_pivot_root: this is the block root in the last slot of the penultimate epoch, relative to the head block (or the genesis root in the case of underflow): get_block_root_at_slot(state, max(compute_start_slot_at_epoch(get_current_epoch(state) - MIN_SEED_LOOKAHEAD) - 1, GENESIS_SLOT)).
Then, include them in the API:

Add proposer_shuffling_pivot_root and attester_shuffling_pivot_root to the head event.
Add proposer_shuffling_pivot_root to the .../validator/duties/proposer/{epoch} return result.
Add attester_shuffling_pivot_root to the .../validator/duties/attester/{epoch} return result.

The (target_root, epoch) tuple uniquely identifies the list of proposers for epoch, whilst the (shuffling_pivot_root, epoch) tuple uniquely identifies the shuffled list of attesters for epoch.
With these two new pieces of information, a VC is able to cheaply determine if any head event has invalidated their proposer/attester shuffling.
There is some VC which caches duties like this:
struct AttesterDuty {
    // The epoch to which this shuffling pertains. I.e., all attestation slots are within this epoch.
    epoch: Epoch,
    // The root which identifies this shuffling.
    shuffling_pivot_root: Hash256,
    validator_index: ValidatorIndex,
    attestation_slot: Option<Slot>,
    .. // Other stuff related to attesting.
}
It's currently epoch current_epoch and we store all the AttesterDuty for this epoch in a list called attester_duties.
When we get a head event, we run this code:
let head_epoch = epoch(head.slot);
if head_epoch > current_epoch {
    panic!("probably a clock sync issue?")
}

let shuffling_pivot_root = if head_epoch + 1 >= current_epoch {
    head.attester_shuffling_pivot_root
} else {
    head.block
};

let mut invalidated_validators = vec![]; // empty list
for duty in attester_duties {
    if (duty.shuffling_pivot_root, duty.epoch) != (shuffling_pivot_root, head_epoch) {
        invalidated_validators.push(duty.validator_index)
    }
}

update_duties(invalidated_validators);
Just like the previous example, we store proposer duties like this:
struct ProposerDuty {
    // The epoch to which this shuffling pertains. I.e., all proposal slots are within this epoch.
    epoch: Epoch,
    // The root which identifies this shuffling.
    shuffling_pivot_root: Hash256,
    validator_index: ValidatorIndex,
    // The list of slots where the proposer should propose.
    proposal_slots: Vec<Slot>
}
It's currently epoch current_epoch and we store all the ProposerDuty for this epoch in a list called proposer_duties.
When we get a head event, we run this code:
let head_epoch = epoch(head.slot);
if head_epoch > current_epoch {
    panic!("probably a clock sync issue?")
}

let shuffling_pivot_root = if head_epoch == current_epoch {
    head.proposer_shuffling_pivot_root
} else {
    head.block
};

let mut invalidated_validators = vec![]; // empty list
for duty in proposer_duties {
    if (duty.shuffling_pivot_root, duty.epoch) != (shuffling_pivot_root, head_epoch) {
        invalidated_validators.push(duty.validator_index)
    }
}

update_duties(invalidated_validators);
I believe this should have a fairly minimal impact on BN implementations, apart from perhaps having to thread these new pivot roots through their caches. Unsurprisingly, Lighthouse's shuffling caches are keyed by this exact scheme.
The reason I think this is low-impact is because it's trivial to compute the pivot roots from the BeaconState
which was used to generate the shuffling. I expect that clients have access to a state
when they're determining the shuffling for some epoch.
I understand that this appears rather complex, but I think it's a concise solution and therefore ultimately less cumbersome and more accurate/efficient than existing solutions.
You might argue that this is leaking information about caches into the API, but I'm not convinced this is the case. I believe this is the fundamental "source of truth" method to detecting re-orgs that affect proposer/attester shuffling.
Starting a validator client for the same validator twice is of course a user error. However, it's probably the most common user error that leads to slashing. If the two clients are started at the same time, there is no way to catch this error; in all other cases, however, the second client being started can detect that the last attestation made by the validator does not match its local storage, and refuse operation (without an override flag).
Suggested solution:
The route GET /eth/v1/validator/sync_committee_contribution (#138) expects the validator to provide a beacon_block_root.
This beacon_block_root will have had to be previously signed by the validator at 1/3 of the slot. What's the recommended strategy to get this info from the beacon node?
The rule should be in .spectral.yml in the root directory of the repository. It should throw an error if anything other than double quotes is used.