graphops / subgraph-radio
Gossip about Subgraphs with other Graph Protocol Indexers
Home Page: docs.graphops.xyz/graphcast/radios/subgraph-radio/intro
License: Apache License 2.0
Problem statement
Indexer-agent now handles multi-network management, and indexer-cli requires protocolNetwork in the indexing rules' schema. This required field is currently missing from the offchain requests the radio operator sends to the indexer management server.
Expectation proposal
- Parse the protocolNetwork from the network-subgraph endpoint, since gateway endpoints follow the graph-network-xxx suffix convention (see the sketch below).
- Add the parsed protocolNetwork to the offchain request.
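As a rough sketch of the suffix parsing (the endpoint URL and helper name here are illustrative, not the actual implementation):

/// Extract the protocol network from a gateway network-subgraph endpoint,
/// assuming the URL ends with a `graph-network-<network>` segment.
fn protocol_network_from_endpoint(endpoint: &str) -> Option<String> {
    endpoint
        .trim_end_matches('/')
        .rsplit('/')
        .next()?
        .strip_prefix("graph-network-")
        .map(str::to_string)
}

fn main() {
    let endpoint = "https://gateway.thegraph.com/network-subgraph/graph-network-mainnet";
    assert_eq!(
        protocol_network_from_endpoint(endpoint),
        Some("mainnet".to_string())
    );
}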
Alternative considerations
If network-subgraph is an indexer's local network subgraph deployment, then the endpoint doesn't follow the suffix convention. We could optionally add an IPFS client that queries for the subgraph deployment's manifest file and read the network name from there. Since we are already adding a specific user config, this extra step can be done later on if there is a more solid reason to add IPFS dependencies.
Add a ProtocolNetwork enum to the SDK for strict validations
See PR description
Problem statement
The existing Slack integration requires a bot token, but all we really need is a webhook URL integration, which is easier for users to set up.
Expectation proposal
Refactor the Slack integration to just require a webhook URL
Problem statement
Currently, the inbound message validation mechanism is set using the ID_VALIDATION configuration variable. That validation level is used for all features (POI cross-checking and Subgraph Upgrade pre-sync).
The default validation level is indexer, which is great for the POI cross-checking feature, but it means that all Upgrade intent messages will be discarded, because they're coming from Subgraph Developers (Upgrade intent messages will be accepted only if the level is graph-network-account, or lower). It's worth mentioning that Upgrade pre-sync messages are tested for whether the sender is the owner of the Subgraph, but that happens after initial validation.
Users can of course set the validation level to graph-network-account; that way they'll accept both POI cross-checking messages from Indexers as usual, as well as Upgrade pre-sync messages from Subgraph Developers. That, however, would compromise security for the POI cross-checking messages.
Expectation proposal
Each feature should have a validation level, adequately corresponding to its function.
Possible solutions
Set the validation level for Upgrade intent messages to graph-network-account, while keeping the configured ID_VALIDATION level for all other features (currently just the POI cross-checking one).
Problem statement
Currently the program has a separate graceful shutdown listener and hardcoded event durations and timeouts.
The async radio operations' timeout durations no longer seem effective.
Expectation proposal
Utilize ControlFlow or another mechanism that unifies shutdown and timeout handling
Alternative considerations
Additional context
https://tokio.rs/tokio/topics/shutdown
Example but not effective in our context of complexity - https://github.com/tokio-rs/axum/blob/025144be7e500e498b036bee8ca8c0489c235622/examples/graceful-shutdown/src/main.rs#L31
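For reference, a minimal sketch of one possible shape, assuming a tokio runtime: a broadcast channel fans the ctrl+c signal out to long-running tasks, and a timeout bounds the wait for them to finish (names here are illustrative):

use tokio::{signal, sync::broadcast, time::{timeout, Duration}};

#[tokio::main]
async fn main() {
    // One sender, many receivers: each long-running task subscribes.
    let (shutdown_tx, _) = broadcast::channel::<()>(1);

    let mut task_rx = shutdown_tx.subscribe();
    let worker = tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = task_rx.recv() => break, // shutdown signal received
                _ = tokio::time::sleep(Duration::from_secs(5)) => {
                    // periodic radio work would happen here
                }
            }
        }
    });

    // Wait for ctrl+c, broadcast shutdown, and bound the wait on the task.
    signal::ctrl_c().await.expect("failed to listen for ctrl+c");
    let _ = shutdown_tx.send(());
    if timeout(Duration::from_secs(10), worker).await.is_err() {
        eprintln!("worker did not stop within 10s, exiting anyway");
    }
}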
Describe the bug
When Subgraph Radio is started with a SERVER_PORT provided and I try to stop the Radio with ctrl+c, it hangs indefinitely with Shutting down server...
To Reproduce
Steps to reproduce the behavior: run the Radio (dev branch) locally with cargo run -p subgraph-radio, then stop it with ctrl+c.
Expected behavior
The server should gracefully shutdown, the server port should be freed, and the Radio should exit.
Desktop (please complete the following information):
rustup 1.26.0
Additional context
Not context but more of a question - will it be an issue if we forcefully shut down the server?
Off the back of #81 we need to remove the Locally Tracked public POIs panel from the Grafana dashboard JSON file.
Describe the bug
The instance of Subgraph Radio running as a Docker container against our mainnet Indexer crashes randomly with no errors; after a restart it runs fine. The last logs we see are:
2023-09-07T16:36:01.984928Z DEBUG graphcast_sdk: Signer is not registered at Graphcast Registry. Check Graph Network, e: ParseResponseError("No indexer data queried from registry for GraphcastID: 0x0f2840ec21b3a4af515358deb250f16d3bca3a7e"), account: Account { agent: "0x0f2840ec21b3a4af515358deb250f16d3bca3a7e", account: "0xb4b4570df6f7fe320f10fdfb702dba7e35244550" }
at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/graphcast-sdk-0.4.2/src/lib.rs:284
2023-09-07T16:36:02.239575Z DEBUG subgraph_radio::operator: Message processor timed out, error: "deadline has elapsed"
at subgraph-radio/src/operator/mod.rs:383
This might get resolved with the recent update of the timeout limit for message processing (added as part of #65).
Problem statement
For usability:
- Change registered-indexer to indexer to permission more accounts by default
- Lower warn to debug for the local sender validation check against its own id_validation
- Change on-chain to comprehensive for higher topics coverage on the network
- Drop the indexer_count_by_ppoi metrics in favor of the ratio strings
Expectation proposal
PR for the changes
Following on from #53, we should add these metrics to the Grafana dashboard. Don't forget to export the new version with the "share externally" toggle enabled.
Problem statement
Protobuf decoding can successfully decode a message into a format that is different from the original format the message was sent in.
Expectation proposal
On a message type level, add regex validation on the specialized inputs
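For example, a sketch of what such a check could look like for a deployment identifier field, assuming (for illustration only) IPFS CIDv0 strings; the actual per-field rules would live on each message type:

use regex::Regex;

/// Check that a deployment identifier looks like an IPFS CIDv0 hash:
/// `Qm` followed by 44 base58 characters.
fn valid_deployment_hash(input: &str) -> bool {
    // Compiled per call for brevity; real code would cache the Regex.
    let re = Regex::new(r"^Qm[1-9A-HJ-NP-Za-km-z]{44}$").unwrap();
    re.is_match(input)
}

fn main() {
    assert!(valid_deployment_hash(
        "QmdemKB9KFeuDcCxRn2iBuRE35LSZ63vDBCdKaBtaw2Qm9"
    ));
    assert!(!valid_deployment_hash("not-a-deployment"));
}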
Problem statement
Currently we have the POI cross-checking and Subgraph Versioning functionalities enabled by default for Subgraph Radio users, but ideally Indexers should be able to choose which functionalities to enable/disable.
Expectation proposal
We should add a new config variable, something like functionalities / features
Document the potential attack where an owner sending malicious messages can abuse indexers' resources, and outline the capital needed by the owner to perform such attacks.
Problem statement
Currently version upgrade messages are ephemeral. For debugging and transparency of the radio operations, a user might find the storage of the version upgrade messages useful when the upgrade is actively in progress.
Expectation proposal
- Store VersionUpgradeMessages as part of persisted_state.
- Add a get_version_upgrades function to the HTTP server for stored messages.
- Prune VersionUpgradeMessages when graph node is no longer syncing the old deployment.
Additional context
Related to #6
Problem statement
We are currently missing E2E tests for
Expectation proposal
Message pipelining is a crucial point for the success and efficiency of Subgraph Radio's POI cross-checking feature. It ensures that public POIs are seamlessly created, sent, received, processed, and cross-checked. This issue goes over the current state and missing pieces of the pipeline.
When a message block number is reached, calculated from a fixed block interval and the synced indexing status, the Subgraph Radio operator initiates the process to create a public POI message.
Once a public POI message is created, the operator immediately passes the message to the Graphcast agent for message signing and broadcast to the Graphcast Network, ensuring other Indexers are informed about the latest POIs.
Other Indexers' Radios constantly listen for incoming public POI messages. Once a message has been sent to the network, they start a local timer for the message collection window and accept messages for that message block. When a message is received, it's authenticated and cached for further processing.
After the collection window for a certain block closes, the radio compares the local POIs against the received public ones. The goal is to establish consensus across all Indexers (radios are limited to computing consensus from the received messages). If there is any new divergence, alerts are triggered.
To provide an overview of and insight into the behavior and performance of Subgraph Radio regarding the handling of POI messages, a persisted summary state is maintained. This summary should enable users to quickly gauge the health, performance, and trends related to the POI messages without diving deep into individual message details.
Summary Specifications (can be extended)
- Average processing time: Σ(Processing times of new messages) / Number of new messages, for each block interval. In combination with average message frequency, we can better understand the performance of the radio in relation to the network.
- autometrics for the functions
- Divergence rate: (Total Divergence Detected / Total Message Count) * 100. Update by recalculating the rate using the updated total divergence and total message count for a given timeframe (see the sketch below).
- More metrics
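A minimal sketch of the divergence-rate item above, with struct and field names invented for illustration:

/// Running totals for one timeframe of the POI summary.
#[derive(Default)]
struct PoiSummary {
    total_messages: u64,
    total_divergences: u64,
}

impl PoiSummary {
    fn record(&mut self, diverged: bool) {
        self.total_messages += 1;
        if diverged {
            self.total_divergences += 1;
        }
    }

    /// (Total Divergence Detected / Total Message Count) * 100
    fn divergence_rate(&self) -> f64 {
        if self.total_messages == 0 {
            return 0.0;
        }
        self.total_divergences as f64 / self.total_messages as f64 * 100.0
    }
}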
Problem statement
Indexers can get notifications for version upgrades. To automate the process of deploying the subgraph on graph-node, we should add this workflow to the radio.
Expectation proposal
- In the VersionUpgradeMessage handler, send a POST request to offchain sync the upgraded subgraph hash (a sketch follows below).
- Check the auto_upgrade config, availability of indexer_management_server_url, and validity of the message.
- Query indexingStatus from graph node to make sure the deployment is being synced.
Related to #6
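As a sketch of the first bullet, assuming the indexer management server accepts a GraphQL mutation along the lines of setIndexingRule — the exact schema and field names here are assumptions, not confirmed by this issue:

use serde_json::json;

/// Ask the indexer management server to offchain sync `new_hash`.
async fn request_offchain_sync(
    management_server_url: &str,
    new_hash: &str,
) -> Result<(), reqwest::Error> {
    let body = json!({
        "query": "mutation setIndexingRule($rule: IndexingRuleInput!) { setIndexingRule(rule: $rule) { identifier } }",
        "variables": {
            "rule": {
                "identifier": new_hash,
                "identifierType": "deployment",
                "decisionBasis": "offchain",
            }
        }
    });
    reqwest::Client::new()
        .post(management_server_url)
        .json(&body)
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}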
Problem statement
vec of WakuPeerData
Problem statement
Motivated by forum post, Initial design doc
Radios can already send and receive subgraph versioning messages. Indexers running the radio with notifications should receive version upgrade messages from verifiable subgraph owners. Currently we expect them to handle the messages manually.
We can consider further handling of the messages and interactions between indexers and subgraph developers. This should close the gap to achieve seamless subgraph version upgrades. For developers, the subgraph can be published and upgraded with deterministic data availability and service quality. For indexers, the subgraph traffic becomes predictable during the upgrade and subgraphs can be pre-synced so query services do not get disrupted.
General workflow
sequenceDiagram
actor DR as Subgraph A Owner
participant GN as Graphcast Network
participant SIR as Subscribed Indexer Radios
participant IMS as Indexer Management Server
SIR-->>GN: Periodic Public PoI messages (topic Subgraph A-0)
Note over DR: Deploy new Subgraph A version 1
DR-->>GN: Send version upgrade message (A-0)
GN-->>SIR: Broadcast version upgrade message (A-0)
activate SIR
SIR->>SIR: Trusted source identity verification
deactivate SIR
opt Sender identity as Subgraph owner verified
opt Auto sync management
SIR->>IMS: POST request to initiate off-chain syncing A-1
IMS->>SIR: Response from Graph Node (Success/Error)
alt Success
SIR->>SIR: Update topics
SIR-->>GN: Subscribe to A-1
SIR-->>GN: Broadcast public status endpoint (A-1)
else Error
SIR-->>GN: Broadcast error message (A-0)
end
end
opt Notifications
activate SIR
SIR->>SIR: Notify events to human
deactivate SIR
end
end
opt Continuous Radio
activate DR
DR-->GN: Collect Public Status messages (A-1)
Note over DR: Monitors for updatedness threshold
deactivate DR
end
DR-->SIR: Switch service from A-0 to A-1, deprecate A-0
SIR-->>GN: Unsubscribe to A-0
Expectation proposal
For indexers:
For subgraph developers:
Issue breakdown
Minimal
- Add indexer_management_server_url: Option<String> to configs. (issue #23)
- In the VersionUpgradeMessage handler, optionally send a POST request to the url to offchain index new_hash. Topics should be automatically updated according to topic_coverage. (issue #24)
- Can extend on the developer + indexer continuous interactions later on.
Alternative considerations
Additional context
From old repo: graphops/poi-radio#211
Problem statement
Currently the configurations are supplied to the CLI as arguments on the same level. We can utilize an input file (in either TOML or YAML format) and group the arguments appropriately.
Expectation proposal
- graph_node_endpoint: String, indexer_address: String, registry_subgraph: String, network_subgraph: String, private_key: Option<String>, mnemonic: Option<String>, ...
- graphcast_network: GraphcastNetworkName, topics: Vec<String>, coverage: CoverageLevel, collect_message_duration: i64, slack_token: Option<String>, slack_channel: Option<String>, discord_webhook: Option<String>, telegram_token: Option<String>, telegram_chat_id: Option<i64>, metrics_host: String, metrics_port: Option<u16>, server_host: String, server_port: Option<u16>, persistence_file_path: Option<String>, radio_name: String, filter_protocol: Option<bool>, id_validation: IdentityValidation, topic_update_interval: u64, log_level: String, log_format: LogFormat, ...
- waku_host: Option<String>, waku_port: Option<String>, waku_node_key: Option<String>, waku_addr: Option<String>, boot_node_addresses: Vec<String>, waku_log_level: Option<String>, discv5_enrs: Option<Vec<String>>, discv5_port: Option<u16>,
- Replace Parser::possible_values and Parser::min_values with Arg::value_parser.
- Use the Waku struct when passing it into GraphcastAgent configurations, which first needs an update in the SDK.
Alternative considerations
just a nice-to-have
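As a sketch of the grouped input file idea above, assuming TOML with serde, an illustrative subset of the fields, and assumed group names:

use serde::Deserialize;

#[derive(Deserialize)]
struct GraphStack {
    graph_node_endpoint: String,
    indexer_address: String,
}

#[derive(Deserialize)]
struct RadioInfrastructure {
    graphcast_network: String,
    topics: Vec<String>,
}

#[derive(Deserialize)]
struct Config {
    graph_stack: GraphStack,
    radio_infrastructure: RadioInfrastructure,
}

fn main() {
    let raw = r#"
        [graph_stack]
        graph_node_endpoint = "http://localhost:8030/graphql"
        indexer_address = "0x0000000000000000000000000000000000000000"

        [radio_infrastructure]
        graphcast_network = "mainnet"
        topics = ["QmdemKB9KFeuDcCxRn2iBuRE35LSZ63vDBCdKaBtaw2Qm9"]
    "#;
    let config: Config = toml::from_str(raw).expect("invalid config file");
    println!("{} topic(s) configured", config.radio_infrastructure.topics.len());
}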
Expectation proposal
We should add a banner to Subgraph Studio that lets subgraph developers know that they can use our one-shot CLI to send messages about when they plan to publish a new version of their subgraph(s). That banner should direct them to our docs where we lay out the steps they need to take to send a message.
Alternative considerations
We could think of ways to integrate the one-shot CLI into Subgraph Studio itself, but that would require enormous effort, since we would need to either 1. somehow wrap the existing one-shot CLI in WASM and create bindings for JS (which includes bundling the Go and C compilers) or 2. create a JS clone of the one-shot CLI (using js-waku).
Allow users to configure an optional env var NOTIFICATIONS_MODE, the default being live, where we keep the current notification behaviour (sending messages when a divergence is spotted during result comparison); the second option being daily, where once a day we send a notification with the current comparison results.
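A minimal sketch of the proposed variable (the parsing here is invented for illustration):

/// NOTIFICATIONS_MODE: `live` keeps the current behaviour, `daily` batches
/// comparison results into one notification per day.
enum NotificationsMode {
    Live,
    Daily,
}

impl NotificationsMode {
    fn from_env() -> Self {
        match std::env::var("NOTIFICATIONS_MODE").as_deref() {
            Ok("daily") => NotificationsMode::Daily,
            _ => NotificationsMode::Live, // default
        }
    }
}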
Open questions
Should we add another var, NOTIFICATIONS_TIME, where users can set a preferred (local or UTC) time for the daily notifications?
Motivated by #71
Resolved by #35
Problem statement
It is useful to be able to know which dependencies are attached to a given POI attestation. This would help us root cause divergence issues.
Expectation proposal
Imagine an Indexer could pass any number of supported dependencies as configuration. If this were a flag, it could take the form:
--dep <type>:<id>=<uri>
poi-radio --dep postgresql:primary=postgresql://host:5432 --dep chain:mainnet=http://geth:8545
For each provided flag, handler logic could be defined for the type that allows POI Radio to extract the version. For example, for the chain type, the POI radio could call web3_clientVersion at the provided uri to get the client version. A SQL statement could similarly be executed to get the PostgreSQL version.
The resulting dependency information could be attached to POI messages.
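A minimal sketch of parsing the proposed flag format (error handling simplified, and not wired into the actual CLI):

/// Split `<type>:<id>=<uri>` into its three parts.
fn parse_dep(raw: &str) -> Option<(String, String, String)> {
    let (type_and_id, uri) = raw.split_once('=')?;
    let (dep_type, id) = type_and_id.split_once(':')?;
    Some((dep_type.to_string(), id.to_string(), uri.to_string()))
}

fn main() {
    let (dep_type, id, uri) =
        parse_dep("chain:mainnet=http://geth:8545").expect("malformed --dep");
    assert_eq!((dep_type.as_str(), id.as_str()), ("chain", "mainnet"));
    assert_eq!(uri, "http://geth:8545");
}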
Alternative considerations
- meta field in waku message
- indexer-service/versions
Problem statement
In order to save the Radio's state between reruns, we currently persist the state in a JSON file. That does the job of seamlessly restarting the Radio, but as Radio traffic, features, message types, collected data, etc. increase, we will need a more scalable approach to data persistence.
Expectation proposal
We should adopt an approach similar to the one in Listener Radio, which uses sqlx.
By default, it could use sqlite and still keep all the data in a local file, but more advanced users should be able to provide a postgres endpoint and store the Radio's data there.
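A minimal sketch of what the connection setup could look like with sqlx's Any driver (the sqlite fallback URL is illustrative):

use sqlx::any::{install_default_drivers, AnyPoolOptions};

/// Connect to the user-provided postgres endpoint, or fall back to a
/// local sqlite file when none is configured.
async fn connect(database_url: Option<String>) -> Result<sqlx::AnyPool, sqlx::Error> {
    // Register the compiled-in drivers (sqlite/postgres) with the Any driver.
    install_default_drivers();
    let url = database_url.unwrap_or_else(|| "sqlite://subgraph-radio.db".to_string());
    AnyPoolOptions::new().max_connections(5).connect(&url).await
}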
Open questions
Should we still keep JSON as an option?
We need to update the current Subgraph Radio configuration in Stakesquid's stack (both testnet and mainnet).
Problem statement
When the subgraph versioning functionality is enabled by an indexer, the indexer will expose indexer management for subgraphs that are covered by the auto_upgrade configuration. This means that a subgraph owner from that covered set of subgraphs can trigger automatic offchain syncing on the indexer. While there is a degree of trust in the subgraph owner and there is no direct attack vector, the indexer is more vulnerable to random version upgrade messages.
There is no ensured way to verify that the new deployment hash is truly an upgrade of the existing subgraph.
Expectation proposal
- Verify the identifier and new_hash of the upgrade message.
Alternative considerations
Enforce a lower bound on subgraph total signal (this might not be necessary, as the existing subgraph identifier is pre-determined by the local indexer).
As it operates, Subgraph Radio surfaces and handles a wide range of data, including messages on the subgraph-radio partition (based on the content topic).
The Radio handles this data and saves some of it to an in-memory store. That store gets persisted to a local JSON file at an interval, in order for the Radio to pick up where it left off between reruns. The data in the store is made available through a GraphQL endpoint that is exposed on the Radio's HTTP server. Aside from that, some of the data flowing through the Radio is tracked with the help of Prometheus metrics and can be optionally exposed for scraping by a Prometheus server. The combination of the data in the store and the metrics serves as a base for the Grafana dashboard configuration. We can already see that the Radio has access to a pool of useful data that can be sampled at any time.
Subgraph Radio also defines multiple helper functions that are used to gather external data needed for its internal logic. This includes helpers for querying data from different GraphQL APIs (Core Network Subgraph, Registry Subgraph), as well as from different parts of the Indexer stack: Graph Node endpoint(s), (optionally) the Indexer Management Server, etc. These helpers can be used to fetch any data that might not be readily available in the store, but can be important, especially for more complex and highly specific tasks.
All of the data mentioned above is currently scattered across different places, and it's hard for users to find what they need, especially if they want to dig deeper and understand a specific event: when and why a POI divergence has happened, which Indexers agree or disagree with the user's locally generated POI, how they're separated into groups, and more. All this is in the context of the POI cross-checking feature of Subgraph Radio, but in the case of the Subgraph Upgrade Pre-sync feature, a user might also be interested to know when a Subgraph Developer has signalled an intent to upgrade their Subgraph, who that Developer is, when they last published a new version, etc. Looking forward, the use cases for a richer interface to the Radio's data set will only grow.
Subgraph Radio users can currently utilise the Grafana dashboard JSON provided in the repo, in order to set up their dashboard and monitor different panels based on Prometheus metrics exposed by the Radio. That provides a snapshot of the current state of the Radio and also some historical data of how the metrics have changed over time.
While the Grafana dashboard is certainly helpful for monitoring the state of the Radio, diving further into the Radio's data needs to happen through the HTTP server's GraphQL API. The GraphQL API provides a lot of useful query options, but it has its limitations; after all, it can only serve data that's readily available to the Radio (in other words, data that is saved locally).
To illustrate the issue more clearly: say a user monitoring Subgraph Radio through the Grafana dashboard notices a POI divergence for a given subgraph on a given block. They then use the HTTP server's GraphQL endpoint to see all the senders that have sent a public POI that differs from the user's locally generated public POI. This is enough manual work as it is, but on top of that the user is still unable to identify those senders by their display name or Indexer URL, for instance. To do that, the user would have to send more requests to the Network subgraph GraphQL endpoint.
Subgraph Radio can serve a frontend application that utilises the in-memory store to visualise POI comparison results, as well as other useful data. This frontend should provide an intuitive interface for users to click-through items and dig deeper into relevant data.
We can start with a single view, similar to the Comparison Results panel from the Grafana dashboard (also drawing inspiration from Poifier's interface). It's important to note that the goal of this frontend is not to mimic/duplicate the panels in the dashboard; the dashboard will remain in use, which is why we don't need to copy or replicate the other panels.
This Comparison Results table view should immediately convey the following information:
This table view should be customisable, users should be able to filter results by subgraph, block, comparison result, sender(s), etc. Applying more than one filter at a time should be supported as well. Users should also be able to click on items such as, for instance, Subgraph deployment hash, sender address, block number, to dig deeper and view all the information we can provide for that item (for instance if it's a block number - we should display all the Subgraphs that were compared at that block number, if it's a Subgraph - all the comparison results we've saved for that Subgraph, all the senders that have also attested public POIs for it, all the blocks we've compared it on, etc).
All of this filtering and partitioning of data should use client-side routing.
After the basic tabular view is in place, we can start supporting more advanced operations, such as:
Problem statement
As a first step to creating a Subgraph Radio frontend view, we should replicate the Comparison Results panel from the current Grafana dashboard configuration:
Expectation proposal
A yew.rs app within Subgraph Radio should be bootstrapped with a basic view - one table showing the Comparison Results. None of this even needs to be interactive for now, we just need to visualise it.
The data can be queried from Subgraph Radio's HTTP server's GraphQL endpoint.
Alternative considerations
We could also kick off the frontend with more/less scope, but this feels like a nice and concrete first step.
Problem statement
Original report by @stake-machine.
Subgraph Radio is currently throwing this warning:
2023-08-12T05:45:26.920034Z WARN graphcast_sdk: err_msg: "Subgraph is indexing an unsupported network unknown, please report an issue on https://github.com/graphops/graphcast-rs"
at /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/graphcast-sdk-0.4.0/src/lib.rs:137
Dependent on graphops/graphcast-sdk#266
stake-machine is reporting seeing these errors when running a mainnet Subgraph Radio:
2023-07-18T20:31:25.113Z ERROR gowaku.node2.lightpush lightpush/waku_lightpush.go:163 creating stream to peer {"peer": "16Uiu2HAmAbxWtHhJBMQ37X3qEDCKMhFVzgqnHkwwrkJAkAryRpdM", "error": "failed to dial 16Uiu2HAmAbxWtHhJBMQ37X3qEDCKMhFVzgqnHkwwrkJAkAryRpdM:\n * [/ip4/95.217.154.162/tcp/31900] dial backoff"}
2023-07-18T20:34:17.517Z ERROR gowaku.node2.filter legacy_filter/waku_filter.go:360 requesting subscription {"fullNode": false, "error": "failed to dial 16Uiu2HAm5uqfdh7z2YTEps2MhvsXTk3uvSHZ9AtVkzipZZGbKJEL:\n * [/ip4/5.78.76.185/tcp/31900] dial tcp4 5.78.76.185:31900: connect: connection refused"}
He can't seem to connect to our boot node(s) and therefore is unable to send/receive messages
Problem statement
The current one-shot CLI is perfect for sending one-off subgraph versioning update messages (as well as any message really, with a few tweaks), but we need it to be more user-friendly. This means possibly changing its name and adding a script that runs it within Docker, eliminating the need for users to install all pre-requisites like Go, Clang, etc. We also need to extract it to a separate repo.
Expectation proposal
Users (subgraph devs) should ideally be able to pull a Docker image (GHCR package) and run it with custom arguments.
Alternative considerations
We could also skip this and instead wait for graphcast-web to be functional before recommending the subgraph versioning feature to subgraph developers, but that will take a lot longer and getting early feedback is vital.
Problem statement
Currently the radio only connects to the Graph node, but doesn't connect to the indexer components. For any management of indexing status, the radio should go through the indexer's server for consistency in management.
Expectation proposal
- Add indexer_management_server_url: Option<String> to radio configs.
- Add auto_upgrade: Coverage (default: comprehensive) to radio configs.
- Add none as a variant to the Coverage enum (see the sketch below).
Related to #6
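A minimal sketch of the extended enum; the existing variant names follow the coverage levels mentioned elsewhere in this repo, and the derive is illustrative:

#[derive(Clone, Debug, Default)]
enum Coverage {
    None, // newly proposed variant: disable auto_upgrade entirely
    Minimal,
    OnChain,
    #[default]
    Comprehensive, // proposed default for auto_upgrade
}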
Problem statement
For each deployment and sender, we can expect 2 PublicPoiMessages to fail validation due to a first-time sender and first nonce, and to subsequently be tried as an UpgradeIntentMessage. This should not cause issues, since the validity check will fail, but it might be misleading for users.
Expectation proposal
When parsing a version upgrade message, check the Error that caused the PublicPoiMessage to fail.
Skip over the two ppoi messages that we can deterministically expect to be invalid: first-time sender and first nonce.
Describe the bug
Bug report from Sergey at P2P.org:
We have enabled discord notifications today. We are getting 16-20 messages per minute from the subgraph-radio right now and already got around 25 notifications per each subgraph. We are running v0.1.6. Is there a way to fine-tune the notifications?
Problem statement
Indexers could gather the Deployment health status from gossip peers for more detailed information when they fail to reach consensus from the Public PoI messages.
Spike first
Implementation Expectations
The DeploymentHealthMessage struct should contain the fields:
deployment: String
health: Health // Enum
errors: Vec<SubgraphError>
where each SubgraphError contains:
error_type: ErrorType // Enum
message: String
block: Optional<BlockPointer>
handler: Optional<String>
deterministic: Bool
- Build DeploymentHealthMessage from the indexing statuses query
- Handle received DeploymentHealthMessages (see the sketch below)
Alternative considerations
Generalize message handlers
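A minimal sketch of the proposed types; the enum variants are assumptions loosely mirroring graph-node's indexing status schema:

#[derive(Clone, Debug)]
pub enum Health {
    Healthy,
    Unhealthy,
    Failed,
}

#[derive(Clone, Debug)]
pub enum ErrorType {
    // hypothetical variants
    Deterministic,
    NonDeterministic,
}

#[derive(Clone, Debug)]
pub struct BlockPointer {
    pub number: u64,
    pub hash: String,
}

#[derive(Clone, Debug)]
pub struct SubgraphError {
    pub error_type: ErrorType,
    pub message: String,
    pub block: Option<BlockPointer>,
    pub handler: Option<String>,
    pub deterministic: bool,
}

#[derive(Clone, Debug)]
pub struct DeploymentHealthMessage {
    pub deployment: String,
    pub health: Health,
    pub errors: Vec<SubgraphError>,
}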
Please share your feedback, feature suggestions, questions, and everything in between
Problem statement
There's manual work required to add the Subgraph Radio to Launchpad
Expectation proposal
Submit a pull request to launchpad to include the Subgraph Radio as part of the default indexing stack
Add Helm chart to cluster
Doc updates that are required
Alternative considerations
N/A
Additional context
N/A
Problem statement
Indexers should be able to provide warp sync (such as subgraph snapshots and substream flatfiles) as a type of data service, whether for the latest or historical data. This service allows the requesting node to access the data without indexing it themselves. The requesting indexer can have immediate access to recent (or historically ranged) blocks' subgraph data and serve queries for that chunk of data, and can optionally backfill the earlier blocks.
To be clear, it is NOT suitable to warp sync over gossipsub and it is NOT suitable to make payment agreements over gossipsub, as gossipsub is neither efficient for this nor offers security guarantees. However, it is possible to perform a handshake pre-check through gossip before any on-chain activities.
Expectation proposal
- Add snapshot and ingest commands to the graphman CLI (is this generalizable for substreams and sql-as-a-service?)
Brief description of what we can expect for the general process
sequenceDiagram
participant FTPS as Direct FTPS Channel
participant Client as New Indexer to Subgraph A
participant Network as Graphcast Network
participant Server as Existing Indexer Radio (Server)
participant Blockchain as Blockchain
Server-->Client: Monitoring channel A
Server-->>Network: Broadcast Public PoI messages
Client-->>Network: Broadcast Warp Sync request
Note right of Client: A at recent block X, set price
opt Willing and able to accept request
Server-->>Network: Acknowledge message with PoI
end
Client-->Network: Collect Acknowledgement msg
Client->>Client: Select a set of acceptable indexers
Client->>FTPS: Expose FTPS port
Client->>Blockchain: Broadcast request on-chain with Time-Lock, deposit sync reward
Blockchain->>Server: Verify client's on-chain agreement
alt Verified agreement
Server->>Server: Take database snapshot with verification
Server->>FTPS: Establish FTPS channel
FTPS->>Server: Handshake
Server->>FTPS: Send snapshot over FTPS channel
FTPS->>Client: Receive snapshot
end
Client->>Client: Ingest Snapshot into Local Database
alt Invalid Snapshot
Client->>Blockchain: Proof for failure
Note over Blockchain: Time lock expired
Blockchain->>Client: Deposit (sync reward) + % Indexer collateral
else Verified Snapshot
Blockchain->>Server: Sync reward + collateral
end
Client-->>Network: Broadcast Public PoI messages
Gossip Handshake
On-chain Verifiability
Once the handshake finishes successfully, both sides should opt in to an on-chain agreement for payment transfer.
After ensuring the opt-in, the responding channel prepares the requested data. Assume the on-chain agreement has specifications such as the subgraph deployment, acceptable range for the block, file format, and cryptographic scheme. It is reasonable for the requester to deposit the payment and for the responder to post collateral at the point of opt-in, and reasonable to include a time-lock that allows disputes or automatically transfers the payment and collateral once the lock expires.
File Transfer
Suppose there's a generalizable way to export and import the requested data; the requesting client and the responding service make a direct connection using FTP, FTPS (secure), or GridFTP (parallel data streams).
Alternative considerations
Potential resources
OpenEth doc
Polkadot doc
Zcash sync library doc
Additional context
libp2p imposes a message size limitation: https://github.com/status-im/nim-libp2p/blob/1681197d67e695e6adccccf56ce0e2d586317d67/libp2p/muxers/mplex/coder.nim#L40
But there's auto-splitting of messages and no limitation on the number of splits.
To track #49
Describe the bug
Given this query
query{
comparisonResults(identifier: "QmdemKB9KFeuDcCxRn2iBuRE35LSZ63vDBCdKaBtaw2Qm9") {
deployment
blockNumber
resultType
localAttestation {
ppoi
}
attestations {
senders
stakeWeight
ppoi
}
}
}
the HTTP server's GraphQL endpoint returns a comparison result that includes a local attestation.
But the panel in Grafana shows a count ratio of 2:0* for that subgraph hash on that block. The stake ratio also shows 0* local stake. This happens for all the divergent subgraphs, as well as the ones where there's no remote ppoi to compare (only a local one).
Expected behaviour
The count ratio should be 2:1*
Off the back of #62, we should implement stricter checks for the INDEXER_ADDRESS variable. The Radio should fail to start if the provided PRIVATE_KEY/MNEMONIC doesn't resolve to the provided INDEXER_ADDRESS, using either the Graphcast Registry or the Network Subgraph.
Before that check, we should do a minimal check that the provided INDEXER_ADDRESS is saved by the Radio as a valid eth address, because in some cases (for instance, missing quotes in a .yml file config) the address format can be malformed.
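A minimal sketch of that pre-check (format only, no checksum validation):

/// A well-formed eth address is `0x` followed by 40 hex characters.
fn is_valid_eth_address(addr: &str) -> bool {
    addr.len() == 42
        && addr.starts_with("0x")
        && addr[2..].chars().all(|c| c.is_ascii_hexdigit())
}

fn main() {
    assert!(is_valid_eth_address("0x0f2840ec21b3a4af515358deb250f16d3bca3a7e"));
    // e.g. a value that lost its quotes in a .yml file and was re-serialized
    assert!(!is_valid_eth_address("310181730876301567336374126543954007774436252014"));
}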
Describe the bug
A wrong address is being checked for the local sender (this only seems to happen sometimes; it's an intermittent log). It doesn't seem to disturb the Radio's operation, it's just misleading to read.
Expected behavior
The log should show the actual address and stake of the Radio's corresponding Indexer.
Logs
2023-08-16T14:44:31.917038Z INFO subgraph_radio::config: Initializing radio operator for indexer identity, my_address: "310181730876301567336374126543954007774436252014", my_stake: 0.0