
simple-taiko-node's People

Contributors

0x4r45h, 2manslkh, blankerl, boroshk, cyberhorsey, d1onys1us, dantaik, davaymne, davidcardenasus, davidtaikocha, dimmvvv, dvjromashkin, ex-scxr, fz3n, joaolago1113, marcuswentz, martonp, mattsu2020, nekokoban-kasoutuu, ondkloss, otomarukanta, papadritta, serverum, spachib, tudorpintea999, wolfderechter


simple-taiko-node's Issues

fix: L2_SUGGESTED_FEE_RECIPIENT: NOT SET

Some users hit this error because changes to the .env file were not reflected in the node's settings: the node could not recognize their EVM address even though it was set in the file. In that case, the shell script below can double-check that the address input is correct (it recomputes the EIP-55 checksum).

# Add address here
address="${L2_SUGGESTED_FEE_RECIPIENT}"

# Remove '0x' and convert to lowercase (tr is portable; sed's \L is GNU-only)
address=$(echo "$address" | sed 's/^0x//' | tr '[:upper:]' '[:lower:]')


# Keccak-256 of the lowercase hex address string (EIP-55). Note: openssl's
# -sha3-256 is NOT Keccak-256; use a dedicated keccak-256 tool for a
# checksum that matches wallets exactly.
hash=$(echo -n "$address" | openssl dgst -sha3-256 -binary | xxd -p -c 64)

checksum="0x"

for ((i=0; i<${#address}; i++)); do
  char="${address:$i:1}"
  hash_char="${hash:$i:1}"

  # Convert to uppercase if the corresponding hash character is greater than or equal to 8
  if (( 0x$hash_char >= 8 )); then
    checksum+=$(echo "$char" | tr '[:lower:]' '[:upper:]')
  else
    checksum+="$char"
  fi
done

echo "$checksum"

Process new L1 blocks error="execution reverted"

Hi All,

When I run the node with the ENABLE_PROVER=true configuration, I get the following error:

" Process new L1 blocks error="execution reverted".
Served taiko_headL1Origin reqid=951 duration="118.344µs" err="not found". "

I have searched around for this error, but it is still not clear what causes it. Thank you.

Required flag "l2.suggestedFeeRecipient" not set

taiko_client_proposer_1        | Required flag "l2.suggestedFeeRecipient" not set
simple-taiko-node_taiko_client_proposer_1 exited with code 1

All other containers are working fine.

ENABLE_PROPOSER=true
L1_PROPOSER_PRIVATE_KEY=ee391aefc2...........daee6f998cd99d62
L2_SUGGESTED_FEE_RECIPIENT=0xd8011.........F7c49890108B35b8003E2
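Before restarting the proposer, a quick sanity check that the variable is both set and well-formed can save a loop of container restarts. This is a hypothetical helper, not part of the repo's scripts:

```shell
# Hypothetical check: L2_SUGGESTED_FEE_RECIPIENT must be a 0x-prefixed,
# 40-hex-digit address for the proposer flag to be accepted.
is_valid_address() {
  echo "$1" | grep -Eq '^0x[0-9a-fA-F]{40}$'
}

if is_valid_address "$L2_SUGGESTED_FEE_RECIPIENT"; then
  echo "L2_SUGGESTED_FEE_RECIPIENT looks valid"
else
  echo "L2_SUGGESTED_FEE_RECIPIENT is missing or malformed" >&2
fi
```

Running `docker compose config` and confirming the value appears in the proposer service's environment also verifies that Compose is actually picking up the edited .env file.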

Doing a proof: assertion failed: `(left == right)`

Greetings! I'm getting an assertion error while trying to run a proof method on prover_rpcd.

[2024-01-22T08:10:42Z INFO  prover::shared_state] task_result: Err(
        "assertion failed: `(left == right)`\n  left: `0`,\n right: `32`",
    )

I first got it while running the prover_cmd binary that I built from the zkevm-chain repository (both the v0.6.0-alpha and deploy-debug branches). My environment variables looked like this:

export PROVERD_BLOCK_NUM=4500 && \
export PROVERD_PARAMS_PATH=kzg_bn254_22.srs && \
export PROVERD_RPC_URL=http://35.195.113.51:8547

Then I tried the prover_rpcd binary, and got the same issue. I used this command to launch it:

./prover_rpcd --bind 0.0.0.0:9000

The curl POST command (no problems with the ProofRequestOptions struct):

curl http://0.0.0.0:9000 -X POST -H "Content-Type: application/json" --data \
'{"jsonrpc": "2.0", "method": "proof","params": [{"circuit": "super","block": 4500, "rpc": "http://35.195.113.51:8547","protocol_instance":{"l1_signal_service": "","l2_signal_service": "","l2_contract": "","request_meta_data":{"id": 0,"timestamp": 0,"l1_height": 0,"l1_hash": "","deposits_hash": "","blob_hash": "","tx_list_byte_offset": 0,"tx_list_byte_size": 0,"gas_limit": 0,"coinbase": "","difficulty": "","extra_data": "","min_tier": 0,"blob_used": false,"parent_metahash": ""},"block_hash": "","parent_hash": "","signal_root": "","graffiti": "","prover": "","treasury": "","gas_used": 0,"parent_gas_used": 0,"block_max_gas_limit": 0,"max_transactions_per_block": 0,"max_bytes_per_tx_list": 0,"anchor_gas_limit": 0},"retry": false,"param":"kzg_bn254_22.srs","mock": false,"aggregate": false,"mock_feedback": false,"verify_proof": false}],"id": 1}'

Thinking it might be a build problem (incorrect sources), I extracted prover_rpcd from the latest docker image.

sudo docker save gcr.io/evmchain/katla-proverd:latest > proverd.tar 

I got this file:

-rwxr-xr-x 1 ader ader 22679560 Jän 22 03:10 prover_rpcd

I still have the same error. The output is this:

$ ./prover_rpcd --bind 0.0.0.0:9000
[2024-01-22T08:10:35Z INFO  prover::server] Listening on http://0.0.0.0:9000
[2024-01-22T08:10:41Z INFO  prover::shared_state] compute_proof: ProofRequestOptions {
        circuit: "super",
        block: 4500,
        rpc: "http://35.195.113.51:8547",
        protocol_instance: RequestExtraInstance {
            l1_signal_service: "",
            l2_signal_service: "",
            l2_contract: "",
            request_meta_data: RequestMetaData {
                id: 0,
                timestamp: 0,
                l1_height: 0,
                l1_hash: "",
                deposits_hash: "",
                blob_hash: "",
                tx_list_byte_offset: 0,
                tx_list_byte_size: 0,
                gas_limit: 0,
                coinbase: "",
                difficulty: "",
                extra_data: "",
                min_tier: 0,
                blob_used: false,
                parent_metahash: "",
            },
            block_hash: "",
            parent_hash: "",
            signal_root: "",
            graffiti: "",
            prover: "",
            treasury: "",
            gas_used: 0,
            parent_gas_used: 0,
            block_max_gas_limit: 0,
            max_transactions_per_block: 0,
            max_bytes_per_tx_list: 0,
            anchor_gas_limit: 0,
        },
        retry: false,
        param: Some(
            "kzg_bn254_22.srs",
        ),
        mock: false,
        aggregate: false,
        mock_feedback: false,
        verify_proof: false,
    }
thread 'tokio-runtime-worker' panicked at 'assertion failed: `(left == right)`
  left: `0`,
 right: `32`', /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/primitive-types-0.12.2/src/lib.rs:60:1
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[2024-01-22T08:10:42Z INFO  prover::shared_state] task_result: Err(
        "assertion failed: `(left == right)`\n  left: `0`,\n right: `32`",
    )

I can query the contents of block 4500 (0x1194) with no problems.

curl http://35.195.113.51:8545 -X POST -H "Content-Type: application/json" --data '{"method":"eth_getBlockByNumber","params":["0x1194", true],"id":1,"jsonrpc":"2.0"}'

Lastly, the status in Grafana (both the L2 execution engine and Holesky L1) looks fine.

Thank you!

Graceful shutdown

Currently the shell scripts don't handle graceful shutdown, so the programs are forcefully exited.
I recommend adding exec and interrupt handlers.
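A minimal sketch of that pattern, assuming a wrapper script (the names here are illustrative, not the repo's actual scripts): start the program in the background, trap INT/TERM, forward the signal to the child, and wait for it:

```shell
# Sketch of graceful shutdown in a shell wrapper: forward INT/TERM to the
# child process instead of letting it be killed abruptly.
run_gracefully() {
  "$@" &
  child=$!
  trap 'kill -TERM "$child" 2>/dev/null' INT TERM
  wait "$child"
  status=$?
  trap - INT TERM
  return "$status"
}

# Demo: the child exits on its own here, so no signal is forwarded.
run_gracefully sleep 0 && echo "child exited cleanly"
```

Alternatively, ending the wrapper with `exec <binary>` replaces the shell entirely, so Docker's SIGTERM reaches the program directly with no handler needed.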

TLS handshake error on L3

Any suggestions on how to fix the error below? I'm getting it while running the L3 proposer/prover. I have a fully synced L2 Taiko node.

simple-taiko-node-l3_taiko_client_proposer-1        | ERROR[07-25|07:22:33.143] Dial ethclient error                     url=wss://161.97.73.230:8548 error="tls: first record does not look like a TLS handshake"

Node synchronization issue

Description:

My node was running smoothly until 4 days ago. It seems to have stopped syncing since block 841917. I noticed the problem yesterday and began my investigations this morning.

It would be a good idea to investigate what may have caused this error.

Symptoms:

Upon node restart, I observed the following error message in the logs:

" ! zkevm_chain_prover_rpcd Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."

Steps Already Taken:

Checked disk space with df -h
Checked ports with netstat -tuln
Monitored network usage with iftop
Checked resource usage with htop and free -m
Changed my RPC endpoint and restarted the node

Observations:

After changing my RPC endpoint and restarting the node, it seems to work. However, I find this odd given my VPS's capabilities and the checks I've performed.

Additional Information:

VPS: [VPS details - CLOUD VPS XL Contabo with 10 vCPU Cores, 60 GB RAM]

Docker image

! zkevm_chain_prover_rpcd The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 0.0s

The script is attempting to run a Docker image that was built for the amd64 (x86_64) architecture, but my host is based on the arm64 (ARMv8) architecture, which has resulted in compatibility issues. This is because Docker images are built for specific hardware architectures (such as amd64, arm64) and cannot run across architectures. Could you add a Docker image that supports the arm64 (ARMv8) architecture, please?

Katla client error="no contract code at given address"

Greetings!

I'm having the same problem as this issue from last April: #47

I am running latest simple-taiko-node code: main branch, commit b0a2648

The client driver log reports this:
simple-taiko-node-taiko_client_driver-1 | no contract code at given address

Grafana is working fine. The JWT secret is found (I had to run it a second time, without destroying the volume on docker compose down).

I am running a Holesky L1 node on same machine.

The Katla endpoints are set as follows, and the websocket connection is established. These are the only deltas from .env.sample:

diff .env .env.sample 
30,31c30,31
< L1_ENDPOINT_HTTP=http://10.132.0.8:8545
< L1_ENDPOINT_WS=ws://10.132.0.8:8546
---
> L1_ENDPOINT_HTTP=
> L1_ENDPOINT_WS=

The contracts in the .env

TAIKO_L1_ADDRESS=0xB20BB9105e007Bd3E0F73d63D4D3dA2c8f736b77
TAIKO_TOKEN_L1_ADDRESS=0x8C5ac30834D3f85a66B1D19333232bB0a9ca2Db0
ASSIGNMENT_HOOK_L1_ADDRESS=0x41e574f051Bd887024B4dEe2a7F684D6936c4488
TAIKO_L2_ADDRESS=0x1670080000000000000000000000000000010001

I do see that the contracts are different in the alpha-6 branch. The instructions here: https://docs.taiko.xyz/guides/run-a-taiko-node/ do not mention switching to another branch.

JSON-RPC via curl works for both nodes, e.g. with method eth_blockNumber. L1 returns something reasonable, and Katla returns 0.

Thank you!

Checking Eldfell L3 node logs

I can check the logs of any container for the L2 node. For example:
docker compose logs -f taiko_client_prover_relayer
But trying the same for L3 gives: 'no such service: l3_taiko_client_prover_relayer'

Docker-Compose command not running

After attaching a tmux session and running cd simple-taiko-node,
sudo docker-compose up -d gives this error:

ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.l2_execution_engine: 'pull_policy'
Unsupported config option for services.taiko_client_driver: 'pull_policy'
Unsupported config option for services.taiko_client_proposer: 'pull_policy'
Unsupported config option for services.taiko_client_prover_relayer: 'pull_policy'
Unsupported config option for services.zkevm_chain_prover_rpcd: 'pull_policy'


Critical Error while starting node

CRIT [04-16|09:05:07.275] Failed to start Taiko client error="invalid L1 prover private key: invalid length, need 256 bits"

Morning. I keep getting the error above in the logs while starting the Taiko node. Are there any problems with the node right now?
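"need 256 bits" means the client expects the key to be exactly 32 bytes, i.e. 64 hex characters; a 0x prefix, stray whitespace, or a truncated paste are common causes. A quick local check, using a hypothetical helper:

```shell
# Check that a private key string is exactly 64 hex characters (256 bits);
# strips an optional 0x prefix first.
is_valid_privkey() {
  key="${1#0x}"
  echo "$key" | grep -Eq '^[0-9a-fA-F]{64}$'
}

# Placeholder 64-character key for demonstration (not a real key):
demo_key=$(head -c 64 /dev/zero | tr '\0' 'a')
is_valid_privkey "$demo_key" && echo "key length OK"
```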

Node Synchronization

Describe the bug

It has been observed that some nodes do not synchronize: the chain head on Grafana is shown as flatlined with no progress. Sometimes a restart of the node achieves some progress, but the node still does not catch up with the chain.

[Screenshots: chain head flatlined on two kernel 6.5 servers vs. syncing on a kernel 5.15 server]

I have been trying to identify the reason, working in parallel with three nodes and using the same RPCs for all three. You can see the examples attached. The two flatlined nodes are running on two different servers, both with Ubuntu 22.04.3 LTS (GNU/Linux 6.5.0-14-generic x86_64). The correct chain head is running on a server with Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-25-generic x86_64).

Steps to reproduce

After identifying the possible issue, and in an effort to double-check the observation, I downgraded one of the 6.5-kernel servers to kernel 5.15 and did a fresh install of docker, git and simple-taiko-node. The node started syncing and is catching up to the current block, though it required a couple of restarts. Based on this analysis, there is a possibility that nodes are not synchronizing due to an issue with kernel versions. This is an assumption based on observations only and may or may not be correct. However, I hope it will assist the Taiko team in taking the project a step further. Thank you.


Difficulties integrating Taiko scripts with continuous integration (CI) pipelines.

Problem: Many users face challenges when trying to integrate Taiko scripts seamlessly into their CI pipelines, leading to inefficiencies in the test workflow and hindering automation efforts.

Solution: Develop a comprehensive guide in the Taiko documentation that outlines best practices for integrating Taiko scripts with popular CI tools such as Jenkins, Travis CI, or GitHub Actions. Provide detailed instructions, sample configuration files, and troubleshooting tips to facilitate smooth integration and ensure reliable execution of Taiko tests in CI environments.

Server hardware configuration requirement

What configuration is required to launch a Taiko node?
I saw that Optimism recommends this:

Recommended Hardware
16GB+ RAM
500GB+ disk (HDD works for now, SSD is better)
10mb/s+ download

aarch64 support?

Hello, it's working fine with the other containers, but one doesn't support ARM. Is there any way this can be fixed?

e6ddbad1cd89 gcr.io/evmchain/katla-proverd:latest "/bin/sh -c /script/…" 2 hours ago Restarting (1) 14 seconds ago simple-taiko-node-zkevm_chain_prover_rpcd-1

Git clone issue

Running the command git clone git@github.com:taikoxyz/simple-taiko-node.git listed in the readme gives me the following error:

Cloning into 'simple-taiko-node'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.

Using git clone https://github.com/taikoxyz/simple-taiko-node.git works fine
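If you'd rather keep the readme's SSH-style command without setting up an SSH key, git can rewrite such URLs to HTTPS transparently (a standard git feature, not specific to this repo):

```shell
# Rewrite SSH-style GitHub URLs to HTTPS for every clone/fetch:
git config --global url."https://github.com/".insteadOf "git@github.com:"

# After this, the readme's command works unchanged:
# git clone git@github.com:taikoxyz/simple-taiko-node.git
```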

failed to insert new head to L2 execution engine

simple-taiko-node-taiko_client_driver-1          | ERROR[06-08|12:39:58.631] Block batch iterator callback error      error="failed to insert new head to L2 execution engine: missing trie node 49c392582ec938adb8a9823446d90cc0418e576e1a19d9f9fddc08961e016a05 (path ) <nil>"

When running simple-taiko-node and then restarting it, it starts showing this error, and the prover no longer works. The only way to fix it is to delete the volumes and start again.

A possible cause is Docker forcefully stopping the container after 10 seconds.
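If the 10-second kill window is the culprit, Compose can extend the grace period per service. A hypothetical override fragment (the default stop_grace_period is 10s; the service name matches this repo's compose file):

```yaml
# docker-compose.override.yml (sketch): give geth time to flush its
# database to disk before Docker escalates to SIGKILL.
services:
  l2_execution_engine:
    stop_grace_period: 3m
```

This should reduce the chance of the "missing trie node" corruption, though a full resync may still be needed once the database is already damaged.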

Incorrect Chain ID in .env file for Jolnir L2 in GitHub Repository

Hello,

I was following the documentation provided on https://taiko.xyz/docs/guides/run-a-taiko-node to set up a Taiko node. According to the documentation, I copied the .env.sample file to a new file named .env using the command cp .env.sample .env. After initiating my node, I noticed that the block synchronization did not match the block numbers available on the explorer.

Seeking help, I visited the project's Discord channel. However, I decided to dig deeper into the issue by myself. Upon reviewing the contents of the .env file, I realized that the Chain ID specified for the Jolnir L2 was 1670005 (Grimsvotn) which seems to be incorrect. The correct Chain ID should be 167007 (Eldfell). This discrepancy was not mentioned anywhere in the documentation, which could lead to confusion and improper node setup.

I believe an amendment in either the documentation or the .env.sample file regarding the correct Chain ID for Jolnir L2 will be beneficial for future node setups.

Thank you for your attention to this matter. I look forward to any updates that may rectify this issue.

Best regards,

Using different HTTP and WS ports

I'm running a local Holesky node and changed its HTTP port from 8545 to 8945 and its WS port from 8546 to 8946 (port 8545 was already taken). The problem is running the Taiko node: I changed the ports in the .env file. I can log in to Grafana, and there is some traffic and there are peer connections, but no chain activity.

docker compose logs
prometheus-1 | ts=2024-02-09T02:06:50.524Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d
prometheus-1 | ts=2024-02-09T02:06:50.526Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)"
prometheus-1 | ts=2024-02-09T02:06:50.526Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)"
prometheus-1 | ts=2024-02-09T02:06:50.526Z caller=main.go:594 level=info host_details="(Linux 5.4.0-105-generic #119-Ubuntu SMP Mon Mar 7 18:49:24 UTC 2022 x86_64 9b0deedf7196 (none))"
prometheus-1 | ts=2024-02-09T02:06:50.526Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus-1 | ts=2024-02-09T02:06:50.526Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus-1 | ts=2024-02-09T02:06:50.531Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus-1 | ts=2024-02-09T02:06:50.532Z caller=main.go:1039 level=info msg="Starting TSDB ..."
prometheus-1 | ts=2024-02-09T02:06:50.535Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
prometheus-1 | ts=2024-02-09T02:06:50.536Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus-1 | ts=2024-02-09T02:06:50.560Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus-1 | ts=2024-02-09T02:06:50.567Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=6.631593ms
prometheus-1 | ts=2024-02-09T02:06:50.568Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus-1 | ts=2024-02-09T02:06:50.627Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=3
prometheus-1 | ts=2024-02-09T02:06:50.632Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=3
prometheus-1 | ts=2024-02-09T02:06:50.661Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=3
prometheus-1 | ts=2024-02-09T02:06:50.661Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=3
prometheus-1 | ts=2024-02-09T02:06:50.661Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=131.441µs wal_replay_duration=93.196189ms wbl_replay_duration=271ns total_replay_duration=100.408213ms
prometheus-1 | ts=2024-02-09T02:06:50.665Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
prometheus-1 | ts=2024-02-09T02:06:50.665Z caller=main.go:1063 level=info msg="TSDB started"
prometheus-1 | ts=2024-02-09T02:06:50.665Z caller=main.go:1064 level=debug msg="TSDB options" MinBlockDuration=2h MaxBlockDuration=1d12h MaxBytes=0B NoLockfile=false RetentionDuration=15d WALSegmentSize=0B WALCompression=true
prometheus-1 | ts=2024-02-09T02:06:50.665Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus-1 | ts=2024-02-09T02:06:50.668Z caller=manager.go:194 level=debug component="discovery manager scrape" msg="Starting provider" provider=static/0 subs=[geth]
prometheus-1 | ts=2024-02-09T02:06:50.668Z caller=manager.go:212 level=debug component="discovery manager scrape" msg="Discoverer channel closed" provider=static/0
prometheus-1 | ts=2024-02-09T02:06:50.668Z caller=manager.go:194 level=debug component="discovery manager notify" msg="Starting provider" provider=static/0 subs=[config-0]
prometheus-1 | ts=2024-02-09T02:06:50.669Z caller=manager.go:212 level=debug component="discovery manager notify" msg="Discoverer channel closed" provider=static/0
prometheus-1 | ts=2024-02-09T02:06:50.669Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.683601ms db_storage=65.881µs remote_storage=2.344µs web_handler=970ns query_engine=1.63µs scrape=2.405806ms scrape_sd=318.102µs notify=211.81µs notify_sd=165.619µs rules=3.781µs tracing=23.725µs
prometheus-1 | ts=2024-02-09T02:06:50.669Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
prometheus-1 | ts=2024-02-09T02:06:50.669Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_driver-1 | Post "http://172.18.0.1:8946": dial tcp 172.18.0.1:8946: i/o timeout
taiko_client_proposer-1 | PROPOSER IS DISABLED
l2_execution_engine-1 | INFO [02-09|02:06:49.131] Enabling metrics collection
l2_execution_engine-1 | INFO [02-09|02:06:49.131] Enabling stand-alone metrics HTTP endpoint address=0.0.0.0:6060
l2_execution_engine-1 | INFO [02-09|02:06:49.131] Starting metrics server addr=http://0.0.0.0:6060/debug/metrics
l2_execution_engine-1 | INFO [02-09|02:06:49.142] Maximum peer count ETH=50 total=50
l2_execution_engine-1 | INFO [02-09|02:06:49.152] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
l2_execution_engine-1 | INFO [02-09|02:06:49.157] Enabling recording of key preimages since archive mode is used
l2_execution_engine-1 | WARN [02-09|02:06:49.157] Disabled transaction unindexing for archive node
l2_execution_engine-1 | INFO [02-09|02:06:49.158] Set global gas cap cap=50,000,000
l2_execution_engine-1 | INFO [02-09|02:06:49.165] Initializing the KZG library backend=gokzg
l2_execution_engine-1 | INFO [02-09|02:06:49.258] Allocated trie memory caches clean=307.00MiB dirty=0.00B
l2_execution_engine-1 | INFO [02-09|02:06:49.259] Using pebble as the backing database
l2_execution_engine-1 | INFO [02-09|02:06:49.259] Allocated cache and file handles database=/data/taiko-geth/geth/chaindata cache=512.00MiB handles=524,288
l2_execution_engine-1 | INFO [02-09|02:06:49.271] Opened ancient database database=/data/taiko-geth/geth/chaindata/ancient/chain readonly=false
l2_execution_engine-1 | INFO [02-09|02:06:49.272] State scheme set to already existing scheme=hash
l2_execution_engine-1 | INFO [02-09|02:06:49.276] Initialising Ethereum protocol network=167,008 dbversion=8
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
l2_execution_engine-1 | INFO [02-09|02:06:49.286] ---------------------------------------------------------------------------------------------------------------------------------------------------------
l2_execution_engine-1 | INFO [02-09|02:06:49.286] Chain ID: 167008 (Taiko Alpha-6 L2 (Katla))
l2_execution_engine-1 | INFO [02-09|02:06:49.286] Consensus: Taiko
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
l2_execution_engine-1 | INFO [02-09|02:06:49.286] Pre-Merge hard forks (block based):
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Homestead: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/homestead.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Tangerine Whistle (EIP 150): #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/tangerine-whistle.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Spurious Dragon/1 (EIP 155): #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/spurious-dragon.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Spurious Dragon/2 (EIP 158): #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/spurious-dragon.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Byzantium: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/byzantium.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Constantinople: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/constantinople.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Petersburg: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/petersburg.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Istanbul: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/istanbul.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Berlin: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/berlin.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - London: #0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/london.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
l2_execution_engine-1 | INFO [02-09|02:06:49.286] Merge configured:
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Hard-fork specification: https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/paris.md
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Network known to be merged: true
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Total terminal difficulty: 0
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
l2_execution_engine-1 | INFO [02-09|02:06:49.286] Post-Merge hard forks (timestamp based):
l2_execution_engine-1 | INFO [02-09|02:06:49.286] - Shanghai: @0 (https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md)
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
l2_execution_engine-1 | INFO [02-09|02:06:49.286] ---------------------------------------------------------------------------------------------------------------------------------------------------------
l2_execution_engine-1 | INFO [02-09|02:06:49.286]
grafana-1 | logger=local.finder t=2024-02-09T02:06:51.656090384Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana-1 | logger=provisioning.plugins t=2024-02-09T02:06:51.763845829Z level=error msg="Failed to read plugin provisioning files from directory" path=/etc/grafana/custom/provisioning/plugins error="open /etc/grafana/custom/provisioning/plugins: no such file or directory"
grafana-1 | logger=provisioning.notifiers t=2024-02-09T02:06:51.763921702Z level=error msg="Can't read alert notification provisioning files from directory" path=/etc/grafana/custom/provisioning/notifiers error="open /etc/grafana/custom/provisioning/notifiers: no such file or directory"
grafana-1 | logger=provisioning.alerting t=2024-02-09T02:06:51.763954367Z level=error msg="can't read alerting provisioning files from directory" path=/etc/grafana/custom/provisioning/alerting error="open /etc/grafana/custom/provisioning/alerting: no such file or directory"
l2_execution_engine-1 | INFO [02-09|02:06:49.287] Loaded most recent local block number=0 hash=7be2de..ea680e td=0 age=54y10mo3w
l2_execution_engine-1 | INFO [02-09|02:06:49.287] Initialized transaction indexer range="entire chain"
l2_execution_engine-1 | INFO [02-09|02:06:49.288] Loaded local transaction journal transactions=0 dropped=0
l2_execution_engine-1 | INFO [02-09|02:06:49.288] Regenerated local transaction journal transactions=0 accounts=0
l2_execution_engine-1 | INFO [02-09|02:06:49.355] Chain post-merge, sync via beacon client
l2_execution_engine-1 | INFO [02-09|02:06:49.356] Gasprice oracle is ignoring threshold set threshold=2
l2_execution_engine-1 | WARN [02-09|02:06:49.360] Engine API enabled protocol=eth
l2_execution_engine-1 | INFO [02-09|02:06:49.360] Starting peer-to-peer node instance=Geth/v1.13.11-stable/linux-amd64/go1.21.7
l2_execution_engine-1 | INFO [02-09|02:06:49.478] New local node record seq=1,707,433,979,340 id=85f496e8d1c5231a ip=127.0.0.1 udp=30303 tcp=30303
l2_execution_engine-1 | INFO [02-09|02:06:49.481] IPC endpoint opened url=/data/taiko-geth/geth.ipc
l2_execution_engine-1 | INFO [02-09|02:06:49.481] Started P2P networking self=enode://8be029de9b1046a48f6f38bcd1ffc69c36d6062d1d25da75aaa83d20821938f2033fba6cafeb5fd113fa3a854ac81bab71d57aa25ad70f34f4f47c3914d63a02@127.0.0.1:30303
l2_execution_engine-1 | INFO [02-09|02:06:49.483] Loaded JWT secret file path=/data/taiko-geth/geth/jwtsecret crc32=0xe45f4da7
l2_execution_engine-1 | INFO [02-09|02:06:49.484] HTTP server started endpoint=[::]:8545 auth=false prefix= cors= vhosts=*
l2_execution_engine-1 | INFO [02-09|02:06:49.484] WebSocket enabled url=ws://[::]:8546
l2_execution_engine-1 | INFO [02-09|02:06:49.485] WebSocket enabled url=ws://[::]:8551
l2_execution_engine-1 | INFO [02-09|02:06:49.485] HTTP server started endpoint=[::]:8551 auth=true prefix= cors=localhost vhosts=*
l2_execution_engine-1 | INFO [02-09|02:06:52.159] New local node record seq=1,707,433,979,341 id=85f496e8d1c5231a ip=173.212.251.148 udp=30303 tcp=30303
l2_execution_engine-1 | INFO [02-09|02:06:59.789] Looking for peers peercount=2 tried=191 static=0
l2_execution_engine-1 | WARN [02-09|02:07:24.362] Post-merge network, but no beacon client seen. Please launch one to follow the chain!

I can't start the Taiko Node with a Local Holesky RPC

Hi, I'm trying to start a Taiko node on a VPC, with a Holesky node that has archive mode activated.
In the different tests I made, I received these errors from the Taiko node:
taiko_client_driver-1 | dial tcp 192.168.160.1:8546: i/o timeout
or
error with RPC endpoint: node (ws://192.168.160.1:8546) must be archive node

What can I do? The Holesky node has ARCHIVE_NODE=true set.

Thanks
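One way to test whether an endpoint really serves archive state is to request a balance at an early block: archive nodes answer, while pruned nodes typically return a "missing trie node" style error. A hypothetical probe, built in the same JSON-RPC style as the other curl examples here:

```shell
# Build an eth_getBalance request pinned to block 0x1; only an archive
# node can serve state that old.
archive_probe_body() {
  printf '{"jsonrpc":"2.0","method":"eth_getBalance","params":["%s","0x1"],"id":1}' "$1"
}

archive_probe_body 0x0000000000000000000000000000000000000000
# Send it with curl against the node's HTTP endpoint, e.g.:
# curl -s -X POST -H "Content-Type: application/json" \
#   --data "$(archive_probe_body 0x0000000000000000000000000000000000000000)" \
#   http://192.168.160.1:8545
```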

Failed to assign prover

I tried almost every prover endpoint from the prover market; nothing works.

taiko_client_proposer_1        | INFO [12-13|03:58:36.332] Attempting to assign prover              endpoint=http://213.133.100.172:9876 fee=1 expiry=1,702,441,716
taiko_client_proposer_1        | WARN [12-13|03:58:36.713] Failed to assign prover                  endpoint=http://213.133.100.172:9876 error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:36.713] Attempting to assign prover              endpoint=http://109.123.252.29:9876  fee=1 expiry=1,702,441,716
taiko_client_proposer_1        | WARN [12-13|03:58:36.742] Failed to assign prover                  endpoint=http://109.123.252.29:9876  error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:36.742] Attempting to assign prover              endpoint=http://107.167.95.10:9876   fee=1 expiry=1,702,441,716
taiko_client_proposer_1        | WARN [12-13|03:58:37.023] Failed to assign prover                  endpoint=http://107.167.95.10:9876   error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:37.024] Attempting to assign prover              endpoint=http://213.133.100.172:9876 fee=1 expiry=1,702,441,716
taiko_client_proposer_1        | WARN [12-13|03:58:37.418] Failed to assign prover                  endpoint=http://213.133.100.172:9876 error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:37.418] Attempting to assign prover              endpoint=http://109.123.252.29:9876  fee=1 expiry=1,702,441,716
taiko_client_proposer_1        | WARN [12-13|03:58:37.468] Failed to assign prover                  endpoint=http://109.123.252.29:9876  error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:37.468] Attempting to assign prover              endpoint=http://107.167.95.10:9876   fee=1 expiry=1,702,441,717
taiko_client_proposer_1        | WARN [12-13|03:58:37.782] Failed to assign prover                  endpoint=http://107.167.95.10:9876   error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:37.783] Attempting to assign prover              endpoint=http://109.123.252.29:9876  fee=1 expiry=1,702,441,717
taiko_client_proposer_1        | WARN [12-13|03:58:37.860] Failed to assign prover                  endpoint=http://109.123.252.29:9876  error="unsuccessful response 422"
taiko_client_proposer_1        | INFO [12-13|03:58:37.860] Attempting to assign prover              endpoint=http://213.133.100.172:9876 fee=1 expiry=1,702,441,717
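
A 422 here means the prover answered but rejected the assignment request (commonly because the offered fee is below the prover's minimum, or it is at capacity), not that it is unreachable. Assuming the endpoint runs the taiko-client prover server and exposes its HTTP status route (the `/status` path and any field names are assumptions; check the taiko-client version you run against), you can compare its advertised minimum with the `fee=1` shown in the logs:

```shell
# Fetch the prover's self-reported status and pretty-print it; look for a
# minimum-fee field and compare it with the fee offered by your proposer.
curl -s http://213.133.100.172:9876/status | python3 -m json.tool
```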

Enabling proposer fails with "The provided prover endpoint is not functional"

Hey,

Got my node synced up and wanted to enable the proposer.

Ran:
stn config proposer

✔ The node is currently not configured as a proposer. Would you like to enable it? · yes
✔ Select a prover endpoint configuration: · Enter a prover endpoint
✔ Please enter your desired prover endpoint · http://taiko-a6-prover.zkpool.io:9876
The provided prover endpoint is not functional.
Failed to initialize or update proposer configuration.
No changes made to proposer configuration.

Why is that?

The documentation at https://docs.taiko.xyz/guides/enable-a-proposer/ only says to run the stn config proposer command but doesn't provide instructions for the endpoint. I assume it's the same as for simple-taiko-node, so I got the prover endpoint from https://docs.taiko.xyz/resources/prover-marketplace

Docs: remove daemon flag by default

It seems like people are running into issues when they run with the -d (detached/daemon) flag. Perhaps add a side note about it and remove it from the default command?

Dial ethclient error: connection refused to L2 WS

Hello! I'm having a lot of trouble syncing my L2 against a local Holesky L1.

After running docker compose down && docker compose up -d && docker compose logs -f, I keep capturing an error just as the L2 execution engine starts:
(screenshot of the error attached)

L1_ENDPOINT_HTTP=http://LOCAL_VPS_IP:8545
L1_ENDPOINT_WS=ws://LOCAL_VPS_IP:8546

My .env file looks like this, but I've also tried 172.17.0.1 and host.docker.internal, to no avail.
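
"dial tcp … i/o timeout" and "connection refused" from the driver usually mean the L1 WS port is not reachable from inside the compose network, most often because geth binds WS to localhost by default; if the Holesky node is geth, it needs `--ws --ws.addr 0.0.0.0` (plus an appropriate `--ws.origins`). You can probe the endpoint from the node host without a WS client, since a WebSocket server answers an HTTP Upgrade request with `101 Switching Protocols` (replace LOCAL_VPS_IP with your host's IP):

```shell
# Probe the WS endpoint with a raw HTTP Upgrade request. "Connection refused"
# means nothing is listening on that address/port; a 101 status line means the
# WebSocket server is reachable from this machine.
curl -s -i \
  -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==' \
  http://LOCAL_VPS_IP:8546 | head -n 1
```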

Instances/Parallelization

Hello, guys. I've noticed that my Taiko node doesn't use the full CPU when making ZK proofs (32 cores / 32 GB / 300 GB — frankly, all the requirements are overkill); it only produces short CPU spikes. Is there a way to run several instances of the Docker images without their ports colliding? And how exactly would P2P networking work in that case? If I open different ports for the next Docker instance, will it still find peers and be able to connect to the network?
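
For what it's worth, running a second copy of the stack mainly requires giving each compose project its own name, volumes, and host port mappings; P2P discovery should still work as long as each instance advertises a distinct, externally reachable TCP/UDP port. A sketch of an override for a second instance — the port numbers are arbitrary examples, and the service name assumes simple-taiko-node's compose file (check yours):

```yaml
# docker-compose.override.yml for a second instance, started with its own
# project name so volumes and container names don't collide, e.g.:
#   docker compose -p taiko2 up
services:
  l2_execution_engine:
    ports:
      - "31303:30303"       # P2P TCP for instance 2 (instance 1 keeps 30303)
      - "31303:30303/udp"   # P2P discovery UDP
      - "18545:8545"        # HTTP RPC
      - "18546:8546"        # WS RPC
```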
