magi's Introduction

Magi 🟠

License: AGPL v3

Magi is an OP Stack rollup client written in Rust, designed to perform the same functionality as op-node. It is compatible with execution clients like op-geth. As an independent implementation, Magi aims to enhance the safety and liveness of the entire OP Stack ecosystem. Magi is still new, so we expect to find some bugs in the coming months. For critical infrastructure, we recommend using op-node.

Running

For convenience, we provide a simple Docker setup to run Magi and op-geth together. This guide assumes you have both docker and git installed on your machine.

Start by cloning the Magi repository and entering the docker subdirectory

git clone https://github.com/a16z/magi.git && cd magi/docker

Next copy .env.default to .env

cp .env.default .env

In the .env file, modify the L1_RPC_URL field to contain a valid Ethereum RPC. For the Optimism and Base testnets, this must be a Sepolia RPC URL. This RPC can either be from a local node, or a provider such as Alchemy or Infura.

By default, the NETWORK field in .env is optimism-sepolia; however, base-sepolia is also supported.

Start the docker containers

docker compose up -d

If the previous step fails with a permission denied error, try running the command with sudo.

The docker setup contains a Grafana dashboard. To view sync progress, you can check the dashboard at http://localhost:3000 with the username magi and password op. Alternatively, you can view Magi's logs by running docker logs magi --follow.

Contributing

All contributions to Magi are welcome. Before opening a PR, please submit an issue detailing the bug or feature. Please ensure that your contribution builds on the stable Rust toolchain, has been linted with cargo fmt, passes cargo clippy, and contains tests when applicable.

Disclaimer

This code is being provided as is. No guarantee, representation or warranty is being made, express or implied, as to the safety or correctness of the code. It has not been audited and as such there can be no assurance it will work as intended, and users may experience delays, failures, errors, omissions or loss of transmitted information. Nothing in this repo should be construed as investment advice or legal advice for any particular facts or circumstances and is not meant to replace competent counsel. It is strongly advised for you to contact a reputable attorney in your jurisdiction for any questions or concerns with respect thereto. a16z is not liable for any use of the foregoing, and users should proceed with caution and use at their own risk. See a16z.com/disclosures for more info.

magi's People

Contributors

alessandromazza98, barnabasbusa, borispovod, controlcpluscontrolv, distributedstatemachine, dk-pontem, evalir, francoabaroa, grapebaba, jgresham, kevinji, krauspt, krishnacore, leaferx, mattsse, merklefruit, mishanefedov, ncitron, refcell, rkrasiuk, shrimalmadhur, thinkafcod, xeno097, zilayo

magi's Issues

refactor: remove mutex in pipeline

Currently we use a mutex to store each stage of the derivation pipeline. This isn't very rustic, as we are sort of skirting around the borrow checker. We should look into how to refactor this into something cleaner and more efficient.
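
One possible direction, sketched below with illustrative types rather than Magi's actual ones: let each stage own its predecessor outright, so the borrow checker enforces exclusive access and no runtime lock is needed.

// Illustrative sketch: stages own their predecessor instead of sharing
// it behind an Arc<Mutex>, so no locking is needed to pull data through.
struct BatcherTransactions {
    buffer: Vec<Vec<u8>>, // raw frames queued from L1
}

struct Channels {
    prev: BatcherTransactions, // owned, not Arc<Mutex<...>>
}

impl Iterator for Channels {
    type Item = Vec<u8>;

    fn next(&mut self) -> Option<Self::Item> {
        // mutable access to the previous stage is free: we own it
        self.prev.buffer.pop()
    }
}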

feat: add output root rpc method

The Optimism specs for the rollup client include a special RPC method called optimism_outputAtBlock which returns the L2 output root for a given block number. This is used by the proposer component to post the outputs to L1. While we're a long way away from connecting Magi to the proposer, we can pretty easily support this. Let's use the jsonrpsee crate for this.

More info on the RPC method here
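
For a rough idea of the shape (not a drop-in implementation; jsonrpsee's registration API varies between releases, and the output-root computation is elided):

use jsonrpsee::RpcModule;

// Sketch: register the method on an RpcModule. Exact callback
// signatures differ across jsonrpsee versions; adjust to the one in
// Cargo.toml. Computing the real output root is left as a todo.
fn register_output_at_block(module: &mut RpcModule<()>) -> eyre::Result<()> {
    module.register_async_method("optimism_outputAtBlock", |params, _ctx| async move {
        let block_num: u64 = params.one()?;
        // todo: fetch the L2 block and compute the output root per the specs
        Ok::<String, jsonrpsee::core::Error>(format!("output root for {block_num}: todo"))
    })?;
    Ok(())
}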

feat(driver): full sync

Implement full sync such that it continually follows the chain based on data derived from L1. We can start this off with just following the finalized chain, but eventually we want to follow the chain tip and handle reorgs.

Panic while running Magi locally

I am trying to do local development where I am using docker to set up everything except magi.

Updated .env looks like
COMPOSE_PROFILES=metrics,${USE_OP_GETH:+op-geth},${USE_OP_ERIGON:+op-erigon}

And then I am using cargo run with the appropriate parameters (l1_rpc_url, network, jwt-secret) to run magi.
It throws the following error while running:

thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', src/l1/mod.rs:203:18

which is this line in l1/mod.rs (the second unwrap() below):

let block = l2_provider
    .get_block_with_txs(l2_start_block - 1)
    .await
    .unwrap()  // unwraps the RPC Result
    .unwrap(); // unwraps the Option -- None here is what panics

Everything works while using dockerized containers.
Am I missing something?
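
For what it's worth, a defensive rewrite of that call site could look like the sketch below (assuming the enclosing function returns eyre::Result), surfacing an error instead of panicking when the block is missing, e.g. because the execution client hasn't synced that far yet:

let block = l2_provider
    .get_block_with_txs(l2_start_block - 1)
    .await? // propagate RPC/transport errors
    .ok_or_else(|| eyre::eyre!("L2 block {} not found", l2_start_block - 1))?;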

Ecotone Tracker

Ecotone upgrade changes overview (WIP)

  • support blobs retrieval in the L1-traversal stage (specs)
    • this will require an L1 CL connection, which Magi currently doesn't have, as it hasn't been needed.
    • for each batcher transaction of type 0x03 in a given block, we need to fetch blobs via the /eth/v1/beacon/blob_sidecars/ endpoint, with the indices filter to skip unrelated blobs.
    • blobs retrieved should be validated (but this can be skipped if the L1 CL is trusted)
    • blob transaction calldata needs to be ignored (not used in blobs version 0)
    • blobs need to be decoded
  • support Ecotone L1 block info (op-node ref)
    • after Ecotone, the L1 Attributes transaction call changes from setL1BlockValues to setL1BlockValuesEcotone.
    • in the Ecotone activation block (unless Ecotone is active since genesis), the Ecotone upgrade transactions need to be handled as a special case.
    • TL;DR on setL1BlockValuesEcotone: the new input args are no longer ABI-encoded function parameters; they are instead packed into 5 32-byte aligned segments starting after the function selector (see the word-splitting sketch after this list).
    • other minor changes: e.g. basefeeScalar and blobBasefeeScalar go from u256 to u32.
  • add parent beacon block root
  • p2p gossip: added blocks/V3 topic
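
As a small illustration of that packed layout, the sketch below splits the post-selector calldata into 32-byte words; how the individual fields are interpreted is defined by the specs and deliberately omitted here.

// Sketch: split setL1BlockValuesEcotone calldata into 32-byte words.
fn split_words(calldata: &[u8]) -> Option<Vec<[u8; 32]>> {
    let body = calldata.get(4..)?; // skip the 4-byte function selector
    if body.is_empty() || body.len() % 32 != 0 {
        return None; // not 32-byte aligned: malformed input
    }
    Some(
        body.chunks_exact(32)
            .map(|chunk| chunk.try_into().expect("chunk is exactly 32 bytes"))
            .collect(),
    )
}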

Update Debian Version in Dockerfile to Resolve GLIBC Compatibility Issue

Description of Changes

Currently, the image is being built using the rust:latest base image, which is based on debian:bullseye, where:

ldd --version
ldd (Debian GLIBC 2.31-13+deb11u6) 2.31

Then, the final binary is placed in the debian:buster image, where the version is:

ldd --version
ldd (Debian GLIBC 2.28-10+deb10u2) 2.28

This leads to an error during application startup, due to a missing GLIBC_2.29 version, as shown below:

magi: /lib/aarch64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by magi)

To resolve this issue and ensure library compatibility, it's recommended to use the same Debian version for both images. In this case, it's proposed to switch from debian:buster-slim to debian:bullseye-slim.

Changes in Dockerfile

-FROM debian:buster-slim
+FROM debian:bullseye-slim

feat(chain): System Config Watcher

Overview

Introduce a System Config Watcher that watches for changes to the system config on new L1 blocks. This should be handled in the L1 Chain Watcher and run for each new block. Note: if the system config changes, any batched transactions in the entire block will be affected by the change.
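
A minimal sketch of the polling side, using the ethers types Magi already builds on (decoding the ConfigUpdate events and applying them is omitted; the function name is illustrative):

use ethers::prelude::*;

// Sketch: fetch any logs the SystemConfig contract emitted in a given
// L1 block. Decoding and applying the update variants is left out.
async fn system_config_logs(
    provider: &Provider<Http>,
    system_config_addr: Address,
    block_number: u64,
) -> eyre::Result<Vec<Log>> {
    let filter = Filter::new()
        .address(system_config_addr)
        .from_block(block_number)
        .to_block(block_number);
    Ok(provider.get_logs(&filter).await?)
}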

feat: handle reorgs

To follow the safe head, we need to be able to gracefully handle reorgs on L1. When the chain reorgs, we need to roll back to the most recent ancestor still on the chain, reset the pipeline, and restart from there.
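
Conceptually, the rollback boils down to finding the most recent common ancestor, as in this hypothetical helper (both hash lookups are stand-ins for local and L1-derived state):

// Sketch: walk back until the locally applied hash matches the
// L1-derived hash; derivation then restarts from that block.
fn common_ancestor(
    mut number: u64,
    local_hash: impl Fn(u64) -> [u8; 32],
    derived_hash: impl Fn(u64) -> [u8; 32],
) -> u64 {
    while number > 0 && local_hash(number) != derived_hash(number) {
        number -= 1;
    }
    number
}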

bug: magi will drop some batch data when it reaches L2 block height 4089330 on the optimism-sepolia network

I started Magi and op-geth with the optimism-sepolia network config via docker-compose.

Here are my .env file configs below:

NETWORK=optimism-sepolia

# should request a app key from alchemy
L1_RPC_URL=https://eth-sepolia.g.alchemy.com/v2/<app_key>
L1_WS_RPC_URL=wss://eth-sepolia.g.alchemy.com/v2/<app_key>

JWT_SECRET=bf549f5188556ce0951048ef467ec93067bc4ea21acebe46ef675cd4e8e015ff

RPC_PORT=9545

EXECUTION_CLIENT=op-geth

EXECUTION_CLIENT_AUTH_RPC_PORT=8551

EXECUTION_CLIENT_RPC_PORT=8545

EXECUTION_CLIENT_WS_PORT=8546

SYNC_MODE=full

COMPOSE_PROJECT_NAME=$NETWORK
COMPOSE_PROFILES=magi,${EXECUTION_CLIENT}

After block height 4089329, the hashes of the produced blocks all differ from the hashes of the same-height blocks on optimism-sepolia:

[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089322 0x4eec6f7a80c9f350eb872c4d8a3fa5cae43a014cd465a869dadf20f2b94408b5
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089323 0x5fc4dbc32a4000f242a36592e9dc38fa00434667fdc70944a3de11626a10eea5
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089324 0xfb5d2542cae811c27419a5f4d185b83ab0a96bbf8c0b6b5e27889eb74fca398f
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089325 0xcf86bd6ef18bae9c6d6ae63754f07e926c5a152a96b512095df332c3aa7993de
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089326 0xf236e5d4093df5c67fbd0d712bfc3f9b10204aebea5f7e8cf2c69b82dbef10e6
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089327 0xa53c06bca11e7afd3208021675baac4bee1cbedf36e837c2f5538a62f9ca3a68
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089328 0xe75c7f7f25da03fffb5cf19698a451fc49c1d8978288498996a2158811407901
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089329 0x2ea5dc85c900443e6ee9615876a413ef3f7f800792886e646f03cae258e30dc4
[Fri, 08 Dec 2023 08:48:09] INFO: safe head updated: 4089330 0x1bcbd7f104f34254cad46fd9545d00c3a537689e606e106ae66f2b0a5ee63f85
[Fri, 08 Dec 2023 08:48:09] WARN: invalid parent hash
[Fri, 08 Dec 2023 08:48:09] WARN: dropping invalid batch
[Fri, 08 Dec 2023 09:00:12] INFO: safe head updated: 4089331 0x167ede8ab58924d5253e13b7a4af1e46fe00b3ed4bad5c2e603ef39139cff2bd
[Fri, 08 Dec 2023 09:00:12] WARN: invalid parent hash
[Fri, 08 Dec 2023 09:00:12] WARN: dropping invalid batch
[Fri, 08 Dec 2023 09:00:12] INFO: safe head updated: 4089332 0xaacdc545483f47a790d7a90223f1ede1e9ca8f0e76cd92a43a66102269f40e60
[Fri, 08 Dec 2023 09:00:12] WARN: invalid parent hash
[Fri, 08 Dec 2023 09:00:12] WARN: dropping invalid batch
[Fri, 08 Dec 2023 09:00:12] INFO: safe head updated: 4089333 0xea8481d31f9ff5725f55409aaff49d289f07d704a3dd8e8e56b89aa2b653c011
[Fri, 08 Dec 2023 09:00:12] WARN: invalid parent hash
[Fri, 08 Dec 2023 09:00:12] WARN: dropping invalid batch
[Fri, 08 Dec 2023 09:00:13] INFO: safe head updated: 4089334 0x9d0c3402fa4d17c9f0b538ef8d05928a0e6d202ddedccdfd2ea8b041284613b2
[Fri, 08 Dec 2023 09:00:13] WARN: invalid parent hash
[Fri, 08 Dec 2023 09:00:13] WARN: dropping invalid batch
[Fri, 08 Dec 2023 09:00:13] INFO: safe head updated: 4089335 0x743cf8112b4b091b989c9237de902f0db14c696bf604cdea385a43803ee7596c

feat(driver): fast sync

Follow the chain head using derived data, but perform the initial sync by reading the L2 output oracle on L1 to find the current head hash, then use the engine's built-in sync mechanism to sync to that head.

`std::sync::RwLock` and tokio runtime

Hey,

I noticed that the std::sync::RwLock being used for the state in the driver might pose some challenges in the future. The RwLock from the standard library is synchronous and has the potential to block the entire runtime, similar to what was observed with the channel in L1.

At the moment, this isn't causing any issues, likely because the batch processing is going step by step. However, as the system evolves, this might lead to complications.

It might be worth considering replacing the standard RwLock with its asynchronous counterpart from tokio. If this route is taken, read/write operations for the state would also need to become asynchronous. This would entail a significant refactoring effort in derive to use asynchronous constructs, such as replacing Iterators with Streams.
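
For illustration, the lock swap itself is mechanical; the real cost is that every caller becomes async, as this small sketch shows:

use std::sync::Arc;
use tokio::sync::RwLock;

#[derive(Default)]
struct State {
    safe_head: u64,
}

// With tokio's RwLock, waiting for the lock yields to the runtime
// instead of blocking the worker thread -- but the caller must be async.
async fn bump_safe_head(state: Arc<RwLock<State>>) {
    let mut guard = state.write().await;
    guard.safe_head += 1;
}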

I'd appreciate hearing your thoughts on this and would welcome any support or guidance if a decision is made to move forward with a PR.

RPC served on localhost isn't available outside of docker containers

Currently the Magi RPC is served on localhost

magi/src/rpc/mod.rs

Lines 110 to 114 in d25b253

pub async fn run_server(config: Arc<Config>) -> Result<SocketAddr> {
    let port = config.rpc_port;
    let server = ServerBuilder::default()
        .build(format!("127.0.0.1:{}", port))
        .await?;

This causes networking issues when running in a container, as the RPC will only be reachable from the container's own localhost.

The port published in the docker compose file won't work if the docker network is in bridge mode (the default).

ports:
- 9200:9200
- "${RPC_PORT}:${RPC_PORT}"

Check signature for gossip blocks

Right now we do not check if the gossip block has the correct signature from the sequencer on it. This means anyone can gossip a block and Magi will be happy to ingest it. Checking the signature solves this issue.
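
A sketch of the check using ethers signature recovery (how the signed payload hash is computed comes from the OP Stack p2p specs and is elided; names are illustrative):

use ethers::types::{Address, RecoveryMessage, Signature, H256};

// Sketch: accept a gossiped block only if its signature recovers to
// the known sequencer address.
fn gossip_signature_ok(signature: &Signature, payload_hash: H256, sequencer: Address) -> bool {
    signature
        .recover(RecoveryMessage::Hash(payload_hash))
        .map(|signer| signer == sequencer)
        .unwrap_or(false)
}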

Cannot Run the docker properly

Hello @ncitron
I keep trying docker compose up -d
but I'm getting this error:

2023-04-21 13:08:20 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:21 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:23 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:24 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:25 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:27 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:30 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:37 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:08:50 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:09:16 exec /scripts/start-op-geth.sh: no such file or directory
2023-04-21 13:10:08 exec /scripts/start-op-geth.sh: no such file or directory

Looks like the scripts folder is not mounted.

Please let me know how I can fix this.

limit the size of files in the directory /var/lib/docker/volumes/optimism_data

How can I limit the size of files in the directory /var/lib/docker/volumes/optimism_data? These files occupy more than 700 GB, and I am currently unsure if I can delete the data. The files taking up the most space in this directory are located at /var/lib/docker/volumes/optimism_data/_data/geth/geth/chaindata/ancient#.

feat(derivation): prune old L1 block info

Currently State stores every previously seen L1Info instance. This is essentially a memory leak and will eventually cause Magi to crash. We shouldn't need to store block information older than sequence_window_size blocks before the current batch epoch.
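
A minimal sketch of the pruning, assuming the info is keyed by L1 block number in an ordered map (the actual State layout may differ):

use std::collections::BTreeMap;

// Sketch: drop L1 info older than seq_window_size blocks before the
// current batch epoch. `V` stands in for Magi's L1Info type.
fn prune_l1_info<V>(infos: &mut BTreeMap<u64, V>, current_epoch: u64, seq_window_size: u64) {
    let cutoff = current_epoch.saturating_sub(seq_window_size);
    *infos = infos.split_off(&cutoff); // keeps keys >= cutoff
}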

feat: add blocktime to chain config.

Right now the blocktime is hardcoded to two seconds in many parts of the codebase. We can refactor this to make it configurable in ChainConfig. This will be useful if other OP Stack chains decide to change the blocktime.
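
A sketch of the shape this could take (field placement illustrative):

// Sketch: carry the blocktime in ChainConfig instead of hardcoding 2s.
pub struct ChainConfig {
    pub blocktime: u64, // seconds between L2 blocks; 2 on current OP Stack chains
}

fn next_block_timestamp(safe_head_timestamp: u64, chain: &ChainConfig) -> u64 {
    safe_head_timestamp + chain.blocktime
}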

feat: use execution client to find restart block

Currently, when Magi starts up it initializes itself by reading from the database. This really isn't needed, since we can just query the L2 RPC of the execution client to fetch the most recent finalized block. We can then find the epoch data we need by decoding the attributes deposited transaction (the first tx in the block).

This would allow us to remove the database entirely.
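
The first half could be as simple as this ethers sketch (decoding the attributes deposited transaction for the epoch data is omitted):

use ethers::prelude::*;

// Sketch: ask the execution client for the latest finalized L2 block,
// with full transactions so the attributes deposited tx (the first tx)
// can then be decoded.
async fn finalized_head(l2_provider: &Provider<Http>) -> eyre::Result<Block<Transaction>> {
    l2_provider
        .get_block_with_txs(BlockNumber::Finalized)
        .await?
        .ok_or_else(|| eyre::eyre!("execution client has no finalized block yet"))
}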

feat: check posted output root for fraud

Check L1 to compare any output roots posted by the proposer against what we expect. If a mismatch is detected, warn the user very loudly by continuously printing warning messages to the CLI.

Might also be cool to have the Grafana dashboard display something if fraud is detected too.

related to #86

Alchemy api key used for testing fails with error: "Monthly capacity limit exceeded"

While trying to add a test I noticed that every call to the test RPC (which uses an exposed Alchemy API key) fails with the error "Monthly capacity limit exceeded".

let rpc = "https://eth-goerli.g.alchemy.com/v2/a--NIcyeycPntQX42kunxUIVkg6_ekYc";

This issue is currently also breaking existing tests, because the ChainWatcher test needs to fetch some blocks but keeps getting stuck for this reason.

Solutions that come to mind:

  • use a public rpc url
  • use the rpc url loaded from env
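
The second option could be as small as this sketch (the variable name is made up):

// Sketch: read the test RPC URL from the environment instead of
// hardcoding an API key. `L1_TEST_RPC_URL` is a hypothetical name.
fn test_rpc_url() -> String {
    std::env::var("L1_TEST_RPC_URL")
        .expect("set L1_TEST_RPC_URL to run RPC-dependent tests")
}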

Better error handling in the Telemetry module

The initialization function in the telemetry module can return an uninformative error message, or fail outright, if an error occurs while launching the Prometheus exporter. In particular, if an error occurs while parsing the address string passed to run, the resulting error message will not contain any additional context about the cause of the error. This can make it difficult to debug telemetry issues, especially in production environments.
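
For the address-parsing case specifically, wrapping the error with context along these lines would make the failure self-explanatory (assuming eyre, which the codebase already uses):

use eyre::WrapErr;
use std::net::SocketAddr;

// Sketch: name the offending input instead of returning a bare parse error.
fn parse_listen_addr(addr: &str) -> eyre::Result<SocketAddr> {
    addr.parse()
        .wrap_err_with(|| format!("invalid telemetry listen address: {addr}"))
}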

Frame derivation is under-constrained

Frames are sometimes derived when they shouldn't be. The batcher transactions stage accepts frames as valid even if the is_last byte isn't set to 0 or 1. This seems to diverge from the specification here and the op-node implementation here.

Impact: an adversarial sequencer can sequence frames that are otherwise valid (except for setting the last byte incorrectly), leading Magi to derive an incorrect frame and therefore an incorrect state root.

The fix is straightforward (just constrain the byte to {0,1}).
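
The constraint could look like this sketch (the error type is illustrative):

// Sketch: reject any is_last byte other than 0 or 1, per the specs.
fn parse_is_last(byte: u8) -> Result<bool, &'static str> {
    match byte {
        0 => Ok(false),
        1 => Ok(true),
        _ => Err("invalid is_last byte: drop the frame"),
    }
}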

Unable to run on Ubuntu 20.04

I've been trying to run Magi on my Ubuntu 20.04. After running docker compose up -d, below is the status:

[+] Running 5/0
 ✔ Container grafana        Running
 ✔ Container prometheus     Running
 ✔ Container node-exporter  Running
 ✔ Container op-geth        Running
 ✔ Container magi           Started

which clearly shows there is something wrong with the magi container. I ran docker logs magi and below is what I see:

error: Found argument '--logs-dir' which wasn't expected, or isn't valid in this context

        If you tried to supply `--logs-dir` as a value rather than a flag, use `-- --logs-dir`

USAGE:
    magi --network <NETWORK> --jwt-secret <JWT_SECRET> --l1-rpc-url <L1_RPC_URL> --l2-rpc-url <L2_RPC_URL> --l2-engine-url <L2_ENGINE_URL> --data-dir <DATA_DIR> --sync-mode <SYNC_MODE>

Note that due to a port conflict I had to change ports in the docker compose YAML file, which I attach here.
magi.txt

let me know how I can solve the issue.

[Bug] `State::is_full` prevents Optimism Sepolia to sync

Hey,

We're in the process of preparing a PR to work with Sepolia since we plan to launch our testnet atop Sepolia.

When attempting to sync with the latest Optimism Sepolia testnet, consider the following config:

(Note: I haven't thoroughly tested the config, but it should work; we will eventually move it to a built-in config):

{
    "genesis": {
      "l1": {
        "hash": "0x48f520cf4ddaf34c8336e6e490632ea3cf1e5e93b0b2bc6e917557e31845371b",
        "number": 4071408
      },
      "l2": {
        "hash": "0x102de6ffb001480cc9b8b548fd05c34cd4f46ae4aa91759393db90ea0409887d",
        "number": 0
      },
      "l2_time": 1691802540,
      "system_config": {
        "batcherAddr": "0x8F23BB38F531600e5d8FDDaAEC41F13FaB46E98c",
        "overhead": "0x00000000000000000000000000000000000000000000000000000000000000bc",
        "scalar": "0x00000000000000000000000000000000000000000000000000000000000a6fe0",
        "gasLimit": 30000000
      }
    },
    "block_time": 2,
    "max_sequencer_drift": 600,
    "seq_window_size": 3600,
    "channel_timeout": 300,
    "l1_chain_id": 11155111,
    "l2_chain_id": 11155420,
    "regolith_time": 0,
    "batch_inbox_address": "0xff00000000000000000000000000000011155420",
    "deposit_contract_address": "0x16fc5058f25648194471939df75cf27a2fdc48bc",
    "l1_system_config_address": "0x034edd2a225f7f429a63e0f1d2084b9e0a93b538"
  }
  

Here's what's happening: Sepolia hasn't had batches for a significant amount of time (I believe up until block 4096599). The L1 watcher fetches blocks, but the state's safe epoch doesn't update since no batches are processed. This leads to a situation where, after a certain point, the node becomes unresponsive because subsequent blocks aren't being processed and the state reports it is full.

Refer to this code segment:

magi/src/driver/mod.rs

Lines 254 to 262 in 36e3e07

async fn handle_next_block_update(&mut self) -> Result<()> {
    let is_state_full = self
        .state
        .read()
        .map_err(|_| eyre::eyre!("lock poisoned"))?
        .is_full();

    if !is_state_full {
        let next = self.chain_watcher.block_update_receiver.try_recv();
I'm unclear on the actual purpose of state.is_full(). If its primary function isn't to allocate processing time for the collected batches, it's worth noting that the node itself has a channel which limits the number of blocks. Additionally, there's a channel for batches which is unbounded. If the unbounded channel is problematic, it could be given a fixed size. But currently, it seems to be causing more problems than it's solving.

There also appears to be an issue concerning block creation for an empty sequencer window:

let batch = if derived_batch.is_none() {
    let state = self.state.read().unwrap();
    let current_l1_block = state.current_epoch_num;
    let safe_head = state.safe_head;
    let epoch = state.safe_epoch;
    let next_epoch = state.epoch_by_number(epoch.number + 1);
    let seq_window_size = self.config.chain.seq_window_size;
    if let Some(next_epoch) = next_epoch {
        if current_l1_block > epoch.number + seq_window_size {
            let next_timestamp = safe_head.timestamp + self.config.chain.blocktime;
            let epoch = if next_timestamp < next_epoch.timestamp {
                epoch
            } else {
                next_epoch
            };
            Some(Batch {
                epoch_num: epoch.number,
                epoch_hash: epoch.hash,
                parent_hash: safe_head.parent_hash,
                timestamp: next_timestamp,
                transactions: Vec::new(),
                l1_inclusion_block: current_l1_block,
            })
        } else {
            None
        }
    } else {
        None
    }
} else {
    derived_batch
};

This segment seems to generate several batches in a row with identical timestamps. As a result, op-geth produces an error and magi crashes. We observed a similar issue on the devnet, but we simply disabled it there. The block hash appears accurate, but state::safe_epoch doesn't update during attribute processing (specifically within the pipeline loop; it only updates after all potential attributes are refreshed).

For testing: the first block from Sepolia's genesis might not be accepted by magi, given that it isn't marked as finalized. A quick workaround is to replace it with 0 or the latest block at the start of the driver. Please be aware that the syncing process can be time-consuming. Also attaching the L2 Sepolia genesis.

So my proposal is:

  • I believe the state.is_full check may be unnecessary.
  • I intend to address the block creation issue. At the moment, updating the state's safe_epoch after batch generation seems promising. Alternatively, utilizing the engine_driver state might be the way forward.

I'd appreciate your feedback. There's a possibility I might be off the mark, but that's how it appears to me at present.

fix: properly decode errors from the engine api

Currently, if the engine API errors, Magi does not correctly decode the error and crashes. At the very least we should be able to decode the error and surface it. Maybe we can even start handling some of the ones that are recoverable.

feat: store debug logs in file

To aid in debugging, we should write the debug logs to a file somewhere. I suspect we can do that with our current tracing infra.
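
A sketch with tracing-appender (directory and file prefix are placeholder choices):

// Sketch: write logs to a daily-rotated file via tracing-appender.
fn init_file_logging() -> tracing_appender::non_blocking::WorkerGuard {
    let file_appender = tracing_appender::rolling::daily("logs", "magi.log");
    let (writer, guard) = tracing_appender::non_blocking(file_appender);
    tracing_subscriber::fmt().with_writer(writer).init();
    guard // hold for the process lifetime so buffered logs flush
}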

op-erigon backward compatibility issue

Hello! I'm Tei from Test in Prod, working on op-erigon.
I want to let you know that there is a breaking change in op-erigon, breaking Magi's docker-compose.

The breaking change is removing --externalcl flag, which was a required option until op-erigon v2.39.0-0.1.1.
In subsequent versions, this behavior has been made the default, and using the flag will result in an error.

It seems Magi's docker-compose file is using the latest tag of the op-erigon docker image, with an entrypoint script that passes --externalcl.

So when the docker image of the next release is uploaded, running Magi with op-erigon will break.
You should pin the op-erigon image to a specific version, or remove --externalcl when the next version is released.

The new version will be released in the next few days. You can get the announcement from our Discord channel.

Thank you!

Error handling / HeadInfo

Source: HeadInfo in /backend/types.rs

  1. The "try_from" implementation is returning "eyre::Report" instead of "Result", which can make debugging more difficult.
  2. The "From for sled::IVec>" implementation is using "panic!" in case serialization fails, which is not a good error handling strategy.

L1 Compatibility

Hey all 👋🏻

Just came across the project. I have seen the hype on Twitter. Awesome work!!!

Just wanted to ask: Do you have the ability to run magi on top of other EVM L1 clients or is currently only op-geth supported?

It's quite a coincidence to see you guys building another L2 on top of the OP Stack. :) We are also building on top of the OP Stack, to enable pessimistic rollups @ethereum-pessimism

BR

Timo

Add support for custom networks via JSON config file

op-node has a flag --rollup.config=/rollup.json to allow it to use network configuration derived from a JSON file.
This would be a useful addition to Magi, for the following reasons:

  1. Right now there is no way to use a different network than the two pre-configured ones
  2. This would make it a lot easier to run Magi on a devnet or any custom OP Stack network
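
A sketch of the loading side with serde (field set abbreviated from the Sepolia config shown earlier; names illustrative):

use serde::Deserialize;

// Sketch: deserialize an external rollup config, mirroring op-node's
// --rollup.config flag. Only a few fields are shown.
#[derive(Deserialize)]
struct ExternalChainConfig {
    l1_chain_id: u64,
    l2_chain_id: u64,
    block_time: u64,
    seq_window_size: u64,
}

fn load_external_config(path: &str) -> eyre::Result<ExternalChainConfig> {
    let file = std::fs::File::open(path)?;
    Ok(serde_json::from_reader(file)?)
}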
