a16z / helios
A fast, secure, and portable light client for Ethereum
License: MIT
Right now we use a 10M gas limit to create the access list, which can cause errors if the from address doesn't have enough funds to pay for the gas. To fix this, use an untrusted eth_estimateGas call to set the gas limit. We should also gracefully handle errors when someone uses an address that has no gas funds as the from parameter. When this happens, we can just return an empty accounts map and force the evm to fetch state on the fly (which will be slow).
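A minimal sketch of the fallback described above, under stated assumptions: estimate_gas_untrusted is a hypothetical stand-in for an untrusted eth_estimateGas call, not a real Helios API, and the balance check is simplified.

```rust
// Hypothetical sketch: pick an access-list gas limit without assuming the
// caller can afford a fixed 10M gas. All names here are illustrative.
const FALLBACK_GAS: u64 = 10_000_000;

// Stand-in for an untrusted eth_estimateGas call to the execution RPC.
fn estimate_gas_untrusted(from_balance_wei: u128, gas_price_wei: u128) -> Option<u64> {
    // A real implementation would ask the RPC; here we just check whether
    // the account can afford a plausible estimate.
    let estimate: u64 = 150_000;
    if from_balance_wei >= estimate as u128 * gas_price_wei {
        Some(estimate)
    } else {
        None
    }
}

// Choose the gas limit used for eth_createAccessList: prefer the untrusted
// estimate, fall back to the fixed limit, and signal (None) when the `from`
// address cannot pay at all, so the caller can skip the access list and let
// the evm fetch state on the fly.
fn access_list_gas_limit(from_balance_wei: u128, gas_price_wei: u128) -> Option<u64> {
    match estimate_gas_untrusted(from_balance_wei, gas_price_wei) {
        Some(g) => Some(g),
        None if from_balance_wei == 0 => None, // empty accounts map path
        None => Some(FALLBACK_GAS),
    }
}

fn main() {
    // Funded account: uses the estimate.
    assert_eq!(access_list_gas_limit(10u128.pow(18), 10u128.pow(9)), Some(150_000));
    // Zero-balance `from`: no access list, evm fetches state lazily.
    assert_eq!(access_list_gas_limit(0, 10u128.pow(9)), None);
    // Underfunded but nonzero balance: fall back to the fixed limit.
    assert_eq!(access_list_gas_limit(1_000, 10u128.pow(9)), Some(10_000_000));
}
```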
Currently execution::types::CallOpts is not exported, and it is needed to be able to use the call function of the Helios client. Will submit a PR.
I quote from the Mustekala project by MetaMask (shelved during crypto winter):
"Let's say there are 10,000 discoverable online/active Full Ethereum Nodes. If we were to turn all Metamask users (about 1.5 million) into Light Clients it would overwhelm them very quickly."
It could be cool for Helios to contribute back to the p2p network if it is to reach widespread adoption.
The Mustekala architecture docs could be an inspiration.
This feature would be synergistic with #59.
There are a few RPC methods from the Ethereum RPC Spec that have not been implemented within Helios yet.
The main ones that might be useful to our users are the following...
The type execution::types::ExecutionBlock is currently not exported, and it is needed to be able to use the get_block_by_number and get_block_by_hash functions of the Helios client. I will submit a PR.
Trying the sample code in the README leads me to a few compilation errors such as this:
error[E0308]: mismatched types
--> /home/naps62/.cargo/git/checkouts/helios-b3bc464b79507b80/e4071fe/execution/src/evm.rs:256:13
|
255 | Ok(Some(AccountInfo::new(
| ---------------- arguments to this function are incorrect
256 | account.balance,
| ^^^^^^^^^^^^^^^ expected struct `primitive_types::U256`, found struct `ethers::types::U256`
|
= note: struct `ethers::types::U256` and struct `primitive_types::U256` have similar names, but are actually distinct types
I tried overriding ethers to use the same fork specified in this repo, but that seems to have changed nothing.
Finally got it running only to now get this output:
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', /home/eth/helios/execution/src/evm.rs:283:26
need to add openssl back to client, consensus, execution, and (maybe) common
We may need to experiment with removing references to the filesystem in the checkpoint saving logic to make this work. It also looks like ethers-rs is not properly compiling to wasm at the moment. BLST may need some messing around with. Overall it's a decent-sized endeavor, but it will really help deliver on the portability front.
The ethers-rs issue is tracked at gakonst/ethers-rs#1824
We need to do a pretty thorough audit of the error handling throughout the codebase. In particular, many call sites use unwrap, which will crash the application; errors should instead be propagated up as Err values to main (using eyre) rather than crashing the application.
Currently for merkle trees we use openethereum/parity-ethereum, which is licensed under the GPL. Since we want to release this under MIT, we cannot use this library due to its copyleft nature. I think the best replacement here would be paritytech/trie, which is licensed under Apache.
Running with helios --execution-rpc https://eth-mainnet.g.alchemy.com/v2/... results in the following:
[2022-12-15T02:50:59Z INFO client::rpc] rpc server started at 127.0.0.1:8545
Illegal instruction (core dumped)
Implement eth_getBlockTransactionCountByHash and eth_getBlockTransactionCountByNumber. To implement this, I suggest looking at how eth_getBlockByHash and eth_getBlockByNumber are implemented. We should be able to service these requests without making any rpc requests, since the ExecutionPayload that we already have has a transactions field whose length we can check.
Some calls are causing a very unexpected failure. For example, calling renderBroker(5) on 0x8bb9a8baeec177ae55ac410c429cbbbbb9198cac always fails on line 287 of evm.rs. This is very unexpected, since in this case we are just fetching and proving that slot, then immediately fetching the slot out of the map and unwrapping it.
Currently, block numbers are represented as Option<u64>, with None referring to the latest block. We should change this representation to an enum with variants for a specific number, latest, and finalized.
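A sketch of the proposed enum, with an illustrative parser for the common JSON-RPC block parameter forms; the variant and function names are assumptions, not Helios' final API.

```rust
// Proposed representation: a dedicated enum instead of Option<u64>, so
// "latest" and "finalized" are explicit variants rather than None.
#[derive(Debug, PartialEq)]
enum BlockTag {
    Number(u64),
    Latest,
    Finalized,
}

// Parse the common JSON-RPC block parameter forms (illustrative only).
fn parse_block_tag(s: &str) -> Option<BlockTag> {
    match s {
        "latest" => Some(BlockTag::Latest),
        "finalized" => Some(BlockTag::Finalized),
        _ => s
            .strip_prefix("0x")
            .and_then(|hex| u64::from_str_radix(hex, 16).ok())
            .map(BlockTag::Number),
    }
}

fn main() {
    assert_eq!(parse_block_tag("latest"), Some(BlockTag::Latest));
    assert_eq!(parse_block_tag("finalized"), Some(BlockTag::Finalized));
    assert_eq!(parse_block_tag("0x10"), Some(BlockTag::Number(16)));
    assert_eq!(parse_block_tag("bogus"), None);
}
```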
When Helios starts, it needs to fetch a recent beacon blockhash to use as the weak subjectivity checkpoint. If this blockhash is too old (under worst case conditions, too old is ~14 days), it is possible for an attacker to trick Helios into following the wrong chain. While this attack is hard to pull off (it requires millions in capital to fill the staking deposit and withdrawal queues), we should still check the checkpoint age, and if it is too old, throw an error and tell the user how to fetch a good blockhash.
To do this, use the bootstrap fetched here and check the age of bootstrap.header.slot using the expected_current_slot and slot_timestamp methods in Consensus. If it is older than 14 days, throw an error (consensus errors can be found here).
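The age check can be sketched as follows, assuming mainnet constants (12-second slots, the mainnet beacon genesis time) and an illustrative string error; expected_current_slot and slot_timestamp mirror the methods named above but are standalone reimplementations here.

```rust
// Sketch of the proposed age check: derive the bootstrap slot's timestamp
// from genesis time and reject checkpoints older than ~14 days.
const MAINNET_GENESIS_TIME: u64 = 1_606_824_023; // mainnet beacon genesis (unix seconds)
const SECONDS_PER_SLOT: u64 = 12;
const MAX_CHECKPOINT_AGE_SECS: u64 = 14 * 24 * 60 * 60;

fn slot_timestamp(slot: u64) -> u64 {
    MAINNET_GENESIS_TIME + slot * SECONDS_PER_SLOT
}

fn expected_current_slot(now: u64) -> u64 {
    (now - MAINNET_GENESIS_TIME) / SECONDS_PER_SLOT
}

// Returns Err if the bootstrap header slot is outside the weak subjectivity window.
fn check_checkpoint_age(bootstrap_slot: u64, now: u64) -> Result<(), String> {
    let age = now.saturating_sub(slot_timestamp(bootstrap_slot));
    if age > MAX_CHECKPOINT_AGE_SECS {
        Err(format!("checkpoint is {} seconds old; fetch a recent one", age))
    } else {
        Ok(())
    }
}

fn main() {
    let now = slot_timestamp(4_756_480); // pretend "now" falls exactly on this slot
    assert_eq!(expected_current_slot(now), 4_756_480);
    // Checkpoint from one day ago (7200 slots/day): fine.
    assert!(check_checkpoint_age(4_756_480 - 7_200, now).is_ok());
    // Checkpoint from ~15 days ago: rejected.
    assert!(check_checkpoint_age(4_756_480 - 15 * 7_200, now).is_err());
}
```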
Currently BeaconBlockBody includes dummy values for proposer_slashings, attester_slashings, and voluntary_exits. This prevents our beacon block verification from working whenever any of these fields are present.
Right now, the configuration options leave a lot to be desired. The Config struct basically holds all configuration data, regardless of whether that configuration should be user facing or not. For example, the user probably doesn't care about the fork slots of each hardfork or the genesis validator root, but they almost certainly care about their rpc url or checkpoint block. We should probably separate these concerns.
The interaction between CLI flags and config options is also somewhat poorly structured and can probably use some refactoring. Along with that, I think we should allow users to set config options using environment variables.
This will allow Tor and Nym support.
I have a draft of this already, but am noticing many RPC providers reject connections coming from non-browsers via tor. Need to see if there is a good way around this.
Some of the functionality I see in the repo, e.g. the various chain configs, a lot of the types under the consensus crate, and definitions like Bytes32, are already defined here:
https://github.com/ralexstokes/ethereum-consensus
I'd consider refactoring to leverage this crate to lower the maintenance burden for this repo.
If this issue gets sufficient engagement, I'm happy to open a PR for the refactoring
We are currently missing a central location to view all RPC methods implemented by Helios. The rpc.rs file defines the methods but we do not have documentation.
A golang client makes a getBalance request:
{"jsonrpc":"2.0","id":1,"method":"eth_getBalance","params":["0x0143bd0cc24d0e1ea46353be5a7972f42abcb175","latest"]}
Helios returns:
{"jsonrpc":"2.0","result":"0x000000000000000000000000000000000000000000000000000824962cde342e","id":1}
The golang client then panics with a fatal error:
json: cannot unmarshal hex number with leading zero digits into Go value of type *hexutil.Big
I checked other rpc endpoints, which return:
{"jsonrpc":"2.0","result":"0x824962cde342e","id":1}
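A sketch of the encoding fix: JSON-RPC QUANTITY values must omit leading zeros (with 0x0 for zero), so the balance should be trimmed before formatting rather than emitted as a full 32-byte word. The helper name is illustrative.

```rust
// Encode a big-endian byte string as a JSON-RPC QUANTITY: no leading zero
// digits, and "0x0" for the zero value.
fn encode_quantity(bytes: &[u8]) -> String {
    let hex: String = bytes.iter().map(|b| format!("{:02x}", b)).collect();
    let trimmed = hex.trim_start_matches('0');
    if trimmed.is_empty() {
        "0x0".to_string()
    } else {
        format!("0x{}", trimmed)
    }
}

fn main() {
    // The balance from the report, as a left-padded 32-byte big-endian word.
    let mut word = [0u8; 32];
    word[25..].copy_from_slice(&[0x08, 0x24, 0x96, 0x2c, 0xde, 0x34, 0x2e]);
    assert_eq!(encode_quantity(&word), "0x824962cde342e");
    assert_eq!(encode_quantity(&[0u8; 32]), "0x0");
}
```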
no more freeloaders using my api key
Steps taken:
thread 'rustc' panicked at 'forcing query with already existing DepNode
........
........
note: rustc 1.67.0-nightly (42325c525 2022-11-11) running on x86_64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C split-debuginfo=unpacked -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [evaluate_obligation] evaluating trait selection obligation for<'a, 'b, 'c, 'd, 'e, 'f, 'g, 'h, 'i> {core::future::ResumeTy, &'a mut execution::evm::Evm<'b, execution::rpc::http_rpc::HttpRpc>, &'c execution::types::CallOpts, &'d execution::evm::Evm<'e, execution::rpc::http_rpc::HttpRpc>, impl core::future::future::Future<Output = core::result::Result<std::collections::hash::map::HashMap<primitive_types::H160, execution::types::Account>, execution::errors::EvmError>>, ()}: core::marker::Send
#1 [typeck] type-checking rpc::<impl at client/src/rpc.rs:112:1: 112:31>::estimate_gas
#2 [typeck_item_bodies] type-checking all item bodies
#3 [analysis] running analysis passes on this crate
end of query stack
error: could not compile client
Certain calls touch large amounts of state across many accounts. Most notably, the Uniswap frontend makes a multicall to read the user's balance of every single token in the default tokenlist, which ends up touching the state of ~200 contracts. When processing this call, the node ends up needing to grab proofs for each different token, which it fetches sequentially as needed during the call. This leads to the call being processed incredibly slowly.
We can use access lists to significantly speed up these kinds of calls. To start, we fetch the access list for the call using the eth_createAccessList rpc call. From there, we can dispatch an eth_getProof call for each state access in parallel. As the calls return, we can store them. Once all calls have returned, we can perform the evm computation using the stored values.
One important edge case that we need to handle here is if the evm accesses state that isn't in our storage data structure. This can happen in cases where eth_createAccessList is not deterministic (block hashes can be used to do this). In these cases, we still need to fetch the proof via rpc.
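The parallel fetch can be sketched with threads (Helios itself is async, so a real version would join futures instead); fetch_proof here is a stand-in for the eth_getProof rpc call, and the types are simplified.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Stand-in for an eth_getProof RPC call: returns fake "proof" bytes.
fn fetch_proof(address: [u8; 20]) -> Vec<u8> {
    vec![address[0]]
}

// Dispatch one proof fetch per access-list entry concurrently, then collect
// the results into a map before running the evm computation against it.
fn fetch_proofs_parallel(access_list: Vec<[u8; 20]>) -> HashMap<[u8; 20], Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    let n = access_list.len();
    for addr in access_list {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send((addr, fetch_proof(addr))).unwrap();
        });
    }
    // Receive exactly n results; order does not matter for a map.
    rx.iter().take(n).collect()
}

fn main() {
    let a = [1u8; 20];
    let b = [2u8; 20];
    let proofs = fetch_proofs_parallel(vec![a, b]);
    assert_eq!(proofs.len(), 2);
    assert_eq!(proofs[&a], vec![1]);
    assert_eq!(proofs[&b], vec![2]);
}
```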
At least block.timestamp is always returning zero, but I suspect difficulty, baseFee, and a few others are broken as well. I think I need to pipe these into revm somehow.
Right now we are using mutex locks to access the node from the rpc. This is definitely not optimal, since it means we cannot do concurrent reads, and different rpc calls can block each other. In practice, we very rarely use the lock to perform a write and very often perform reads. We need to research the best mechanism to make these concurrent reads work.
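One candidate mechanism is a reader-writer lock. This sketch uses std::sync::RwLock with a toy Node type; in the actual async codebase an async-aware lock such as tokio's RwLock would be the realistic choice.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Toy stand-in for the shared node state behind the rpc.
struct Node {
    latest_block: u64,
}

fn main() {
    let node = Arc::new(RwLock::new(Node { latest_block: 0 }));

    // Writer: the sync loop advancing the chain head (rare).
    {
        let mut guard = node.write().unwrap();
        guard.latest_block = 15_000_000;
    }

    // Readers: concurrent rpc handlers; read locks do not block each other.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let node = Arc::clone(&node);
            thread::spawn(move || node.read().unwrap().latest_block)
        })
        .collect();

    for h in handles {
        assert_eq!(h.join().unwrap(), 15_000_000);
    }
}
```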
We should include sections for installing, contributing, and an in depth description of how everything works.
eth_getBlockByNumber and eth_getBlockByHash both take a full_tx parameter. If it is true, we need to provide the full tx for each transaction in the transaction list rather than just the hash. Right now, we ignore this parameter and assume it is false. This can cause bugs in certain dapps (it seems to break aave).
Right now the core machinery inside consensus/consensus.rs isn't quite identical to the spec. I chose to rewrite it at the time since I didn't like how the spec structures it, but I think for safety it will be best to perfectly mirror the spec to avoid introducing any vulnerabilities.
We currently don't have any of the rpc methods related to fetching logs implemented. There seems to be a lot of them, and I'm not sure which ones dapps use the most (it seems dapps are using logs less and less nowadays). I think the best plan is to use a bunch of dapps and keep track of which of the log methods they are using and implement them.
This one might be a stretch, but if we can fetch all beaconchain data (updates and beacon blocks) from the p2p network, we could entirely remove the need for an eth2 rpc. Nimbus nodes already gossip all the light client data we need.
For integrating with EL full nodes, we should support calling the engine api, specifically engine_forkchoiceUpdatedV1, which Akula uses to follow the chain.
Right now, if the client is unable to advance on the beaconchain, all client fetch methods continue to operate on the stale chain state. This is not optimal, since it could lead to unintended behavior (such as parameterizing a uniswap slippage limit with a stale price quote).
To fix, we should check the timestamp of the most recent beacon block, and throw an error if it is too old when users make requests using latest or finalized blocknums, or call eth_blockNumber.
Currently Node stores all historical ExecutionPayload entries in a hashmap. This mapping has an unbounded size, and as the node runs for a long period of time this can grow to be quite large. We should probably replace this mapping with a more suitable data structure like a queue.
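A bounded store could look like the following sketch, which pairs a queue of block numbers with the map and evicts the oldest entry once capacity is reached; the types are simplified stand-ins (u64 keys to raw payload bytes) rather than Helios' ExecutionPayload.

```rust
use std::collections::{BTreeMap, VecDeque};

// Bounded payload store: the VecDeque tracks insertion order so the map
// never grows past `capacity` entries.
struct PayloadStore {
    capacity: usize,
    order: VecDeque<u64>,
    payloads: BTreeMap<u64, Vec<u8>>,
}

impl PayloadStore {
    fn new(capacity: usize) -> Self {
        Self { capacity, order: VecDeque::new(), payloads: BTreeMap::new() }
    }

    fn insert(&mut self, block_number: u64, payload: Vec<u8>) {
        if self.order.len() == self.capacity {
            // Evict the oldest payload to keep memory bounded.
            if let Some(oldest) = self.order.pop_front() {
                self.payloads.remove(&oldest);
            }
        }
        self.order.push_back(block_number);
        self.payloads.insert(block_number, payload);
    }

    fn get(&self, block_number: u64) -> Option<&Vec<u8>> {
        self.payloads.get(&block_number)
    }
}

fn main() {
    let mut store = PayloadStore::new(2);
    store.insert(1, vec![0xa]);
    store.insert(2, vec![0xb]);
    store.insert(3, vec![0xc]); // evicts block 1
    assert!(store.get(1).is_none());
    assert_eq!(store.get(3), Some(&vec![0xc]));
}
```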
Currently "checkpoints" is not being exported, so I submitted a PR adding it to the exports, along with any other config modules as needed.
We probably want this to be as easy as possible to install. At the very least we should add some CI to cross compile binaries for all major platforms. It might also be a good idea to build something similar to foundryup to automate the install process.
I was experimenting with react-native-helios and encountered the following discrepancy when calling ethers.provider.getCode() against the Helios RPC:
const helios = getHeliosProvider(ethereumMainnet);
const alchemy = new ethers.providers.AlchemyProvider(
'mainnet',
apiKey,
);
const address = "0x00000000006c3852cbEf3e08E8dF289169EdE581";
const alchemyCode = await alchemy.getCode(address); // 0x60806040526004361015610013575b6...
const heliosCode = await helios.getCode(address); // 60806040526004361015610013575b6... (invalid hexlify value)
It looks like we're not returning the expected hexadecimal prefix. If I add 0x to the response returned by Helios, the two responses are identical.
I'm running helios at 4c72344b55991b6296ccbb12b3c9e3ad634d593e; I'll bring this up to latest to see if the issue persists.
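The fix itself is small; a sketch of normalizing the returned bytecode string before sending it back (ensure_hex_prefix is a hypothetical helper name, not Helios' actual code path):

```rust
// Make sure the eth_getCode response carries the 0x prefix that ethers'
// hexlify expects, adding it only when missing.
fn ensure_hex_prefix(code: &str) -> String {
    if code.starts_with("0x") {
        code.to_string()
    } else {
        format!("0x{}", code)
    }
}

fn main() {
    assert_eq!(ensure_hex_prefix("6080604052"), "0x6080604052");
    // Already-prefixed responses are left untouched.
    assert_eq!(ensure_hex_prefix("0x6080604052"), "0x6080604052");
}
```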
Hi! I noticed the shared_prefix_length function in the proofs module compares the paths from the proof and the path we're verifying, but it doesn't exit early if a nibble that doesn't match is found:
https://github.com/a16z/helios/blob/master/execution/src/proof.rs#L136-L138
For instance, when comparing 0xabcd with 0xabfd it will return a shared prefix length of 3 instead of 2.
This means a proof could actually be showing a divergent path, but because some nibbles match at the end of the path, the verifier will advance to later nibbles and potentially validate an invalid proof (though this is probably highly impractical to exploit in a real-world account or storage proof).
Moreover, as we're walking along the proof, if such a divergent path were encountered, the verifier should reject the proof if this is not the last node.
(I noticed this while comparing the verifier implementation to https://github.com/lidofinance/curve-merkle-oracle/blob/main/contracts/MerklePatriciaProofVerifier.sol#L74-L95 as it has a similar structure)
I have a fix for this ready in https://github.com/pcarranzav/helios/tree/fix-shared-prefix-length and will PR right after posting this (my laptop just seems to be taking ages to install the dependencies so that I can check the tests run properly...)
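For reference, a correct shared_prefix_length stops counting at the first mismatched nibble instead of counting every matching position. This standalone sketch (not the actual Helios code) reproduces the 0xabcd vs 0xabfd example:

```rust
// Count only the leading run of matching nibbles; stop at the first mismatch.
fn shared_prefix_length(a: &[u8], b: &[u8]) -> usize {
    a.iter()
        .zip(b.iter())
        .take_while(|(x, y)| x == y)
        .count()
}

fn main() {
    // Paths as nibble arrays: 0xabcd vs 0xabfd share only [0xa, 0xb].
    let path: [u8; 4] = [0xa, 0xb, 0xc, 0xd];
    let proof_path: [u8; 4] = [0xa, 0xb, 0xf, 0xd];
    assert_eq!(shared_prefix_length(&path, &proof_path), 2);
    // A buggy "count all equal positions" version would have returned 3,
    // because the trailing 0xd also matches.
}
```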
Reported by Mason. I can't repro this consistently, but have had this happen in the past. A nice solution might be to follow what some other nodes do, and allow the user to ctrl-c multiple times in a row to force exit the application.
Hey y'all! Found your project through beerus, a Starknet light client that leverages helios.
I'm trying to understand the interaction flow that occurs in a helios light client and thought that a simple diagram (excalidraw-like) could help!
something in the spirit of
Thank you all for your help!
We need CLI flags to set the execution and consensus rpc urls.
We probably want a single top level crate that re-exports most of the functionality, while hiding a lot of the internals better. This means that most people who want to consume helios can then just import one crate rather than a bunch of the required crates in this workspace. We can also hide a lot of functionality that users don't need here. If anyone does happen to need it (such as some of the internal consensus stuff), they are free to import any of the individual packages.
Using 0x85e6151a246e8fdba36db27a0c7678a575346272fe978c9281e13a8b26cdfa68 for example causes an RpcError on fetching the bootstrap, immediately followed by an InvalidPeriod error. It seems other checkpoints in this sync period are failing as well.
Logs provided by Mason. I am still unable to reproduce locally though.
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
/Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
/Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: Too many open files (os error 24)
Location:
/Users/runner/work/lightclient/lightclient/execution/src/evm.rs:89:27', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:73:20', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:107:47
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:53:02Z INFO consensus::consensus] applying optimistic update slot=4756462 confidence=98.05% delay=00:00:15
[2022-09-23T02:53:05Z WARN client::rpc] Too many open files (os error 24)
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:53:41Z INFO consensus::consensus] applying optimistic update slot=4756465 confidence=99.22% delay=00:00:18
[2022-09-23T02:53:43Z WARN client::rpc] Too many open files (os error 24)
[2022-09-23T02:55:12Z INFO consensus::consensus] applying optimistic update slot=4756472 confidence=98.83% delay=00:00:25
[2022-09-23T02:55:22Z WARN client::rpc] (code: -32000, message: already known, data: None)
[2022-09-23T02:55:24Z INFO consensus::consensus] applying optimistic update slot=4756474 confidence=99.41% delay=00:00:13
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[2022-09-23T02:56:03Z INFO consensus::consensus] applying optimistic update slot=4756477 confidence=99.02% delay=00:00:16
[2022-09-23T02:56:05Z WARN client::rpc] (code: -32000, message: nonce too low, data: None)
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:56:48Z INFO consensus::consensus] applying finality update slot=4756416 confidence=83.01% delay=00:13:793
[2022-09-23T02:56:49Z INFO consensus::consensus] applying optimistic update slot=4756480 confidence=83.01% delay=00:00:26
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
[2022-09-23T02:56:51Z WARN client::rpc] error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: error sending request for url (https://eth-mainnet.g.alchemy.com/v2/Q0BqQPbTQfSMzrCNl4x80XS_PLLB1RNf): error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
Caused by:
0: error trying to connect: dns error: failed to lookup address information: nodename nor servname provided, or not known
1: dns error: failed to lookup address information: nodename nor servname provided, or not known
2: failed to lookup address information: nodename nor servname provided, or not known
Location:
execution/src/rpc/http_rpc.rs:51:30', /Users/runner/work/lightclient/lightclient/execution/src/evm.rs:111:46
[2022-09-23T02:57:39Z INFO consensus::consensus] applying optimistic update slot=4756485 confidence=97.85% delay=00:00:16
[2022-09-23T02:57:41Z WARN client::rpc] Too many open files (os error 24)
Other EVM chains may be able to work with Helios. Some chains may have viable light sync protocols that we can swap out for, and some L2s may be workable. We should look into the following.
Tried installing on Ubuntu 18.04 following the instructions for heliosup, but attempting to run the resulting helios binary fails with the following errors:
helios: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by helios)
helios: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by helios)
(Ubuntu 18.04 has glibc 2.27.)
Normally this would happen because the build was performed on a newer operating system. Building on an old one should give better compatibility. Alternatively, statically linking a libc (by targeting musl) would avoid this issue.
When the node shuts down, we should write a suitable checkpoint block to a file for later use. Otherwise it's possible that people will use very old checkpoints that are outside of the weak subjectivity period.
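Checkpoint persistence could be sketched like this, writing the 32-byte checkpoint hash to a file on shutdown and reading it back on the next start; the file location and function names are illustrative, not Helios' actual layout.

```rust
use std::fs;
use std::path::PathBuf;

// Illustrative location for the persisted checkpoint (a real client would
// use a proper data directory).
fn checkpoint_path() -> PathBuf {
    std::env::temp_dir().join("helios-checkpoint-demo")
}

// On shutdown: persist the latest suitable checkpoint block hash.
fn save_checkpoint(checkpoint: &[u8; 32]) -> std::io::Result<()> {
    fs::write(checkpoint_path(), checkpoint)
}

// On startup: load the saved checkpoint so the user doesn't fall back to a
// hash outside the weak subjectivity period.
fn load_checkpoint() -> std::io::Result<[u8; 32]> {
    let bytes = fs::read(checkpoint_path())?;
    let mut out = [0u8; 32];
    out.copy_from_slice(&bytes);
    Ok(out)
}

fn main() {
    let checkpoint = [0x85u8; 32];
    save_checkpoint(&checkpoint).unwrap();
    assert_eq!(load_checkpoint().unwrap(), checkpoint);
    fs::remove_file(checkpoint_path()).unwrap();
}
```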
Following #145, additional types need to be exported in order to use the Helios light client externally. I will submit a PR proposing to export the remaining types, which will enable use of the client functions from external locations.