
NEAR as data availability!

License: MIT License


Data Availability

Utilising NEAR as a data availability layer, with a focus on lowering rollup DA fees.

Components

This section outlines the components of the project and their purposes.

Blob store contract

This contract provides the store for arbitrary DA blobs. In practice, these "blobs" are sequencing data from rollups, but they can be any data.

NEAR blockchain state storage is cheap: at the time of writing, storing 100 KiB costs a flat fee of 1 NEAR, so at that rate even 1 MiB of batch data would already cost over 10 NEAR in state fees. To cut costs further, we don't store the blob data in blockchain state at all.

It works by taking advantage of NEAR consensus around receipts. When a chunk producer processes a receipt, there is consensus around that receipt. Once the chunk has been processed and included in a block, however, the receipt is no longer required for consensus and can be pruned. Pruning happens after at least 3 NEAR epochs, where each epoch is 12 hours; in practice it is around five epochs. Once the receipt has been pruned, archival nodes are responsible for retaining the transaction data, and we can even get the data from indexers.
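To make this concrete, here is a minimal sketch of the idea, assuming near-sdk; it is not the actual near_da_blob_store contract, whose method names and argument types may differ. The method accepts the blob but deliberately writes nothing to state, so the data only ever lives in the transaction and its receipt:

use near_sdk::borsh::{self, BorshDeserialize, BorshSerialize};
use near_sdk::near_bindgen;

#[near_bindgen]
#[derive(Default, BorshDeserialize, BorshSerialize)]
pub struct BlobStore;

#[near_bindgen]
impl BlobStore {
    // Accept blob bytes but keep nothing in contract state: consensus
    // over the receipt is the point, and full nodes retain the input
    // data for at least ~3 epochs (archival nodes for longer).
    pub fn submit(&mut self, blob: Vec<u8>) {
        let _ = blob;
    }
}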

A blob commitment is a set of NEAR transaction IDs, the number depending on the blob size. To verify that a blob was submitted on NEAR, you can check your received commitment via a Merkle inclusion proof with the NEAR light client, or use the rough-ish ZK light clients. If you don't mind getting your hands dirty, you can also call the Aurora Rainbow Bridge via a view call on Ethereum, with some data transformation. There was some WIP work to use the experimental Merkle pollarding from near-light-client here.

What this means:

  • consensus is provided around the submission of a blob by NEAR validators
  • full nodes store the function input data for at least three days
  • archival nodes can store the data for longer
  • we don't occupy consensus with more data than necessary
  • indexers can also be used; this data is currently indexed by all significant explorers on NEAR
  • the commitment is available for a long time and is straightforward to create

Light client

A trustless off-chain light client for NEAR with DA-enabled features.

The light client provides easy access to transaction and receipt inclusion proofs within a block or chunk. This is useful for checking any dubious blobs that may not have been submitted, or for validating that a blob has been submitted to NEAR.

A blob submission can be verified by:

  • taking the NEAR transaction ID from Ethereum for the blob commitment, then asking the light client for an inclusion proof for the transaction ID or the receipt ID; this gives you a Merkle inclusion proof for the transaction/receipt.
  • once you have the inclusion proof, you can ask the light client to verify it for you, or advanced users can verify it manually (a sketch of fetching such a proof follows this list).
  • armed with this knowledge, rollup providers can integrate deeply with light clients and build proving systems around them.
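As a rough illustration of the first two steps, here is a sketch that asks a NEAR RPC node for a transaction inclusion proof via the EXPERIMENTAL_light_client_proof JSON-RPC method. The endpoint, account, and hashes are placeholders; it assumes reqwest (with the blocking and json features) plus serde_json:

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        "jsonrpc": "2.0",
        "id": "dontcare",
        "method": "EXPERIMENTAL_light_client_proof",
        "params": {
            "type": "transaction",
            // Transaction ID recovered from the blob commitment on Ethereum:
            "transaction_hash": "INSERT_TX_HASH",
            "sender_id": "your-da-account.testnet",
            // The light client head you trust:
            "light_client_head": "INSERT_BLOCK_HASH"
        }
    });
    let proof: serde_json::Value = reqwest::blocking::Client::new()
        .post("https://rpc.testnet.near.org")
        .json(&body)
        .send()?
        .json()?;
    // The response carries the Merkle path, which you can verify yourself
    // or hand to the light client for verification.
    println!("{proof:#}");
    Ok(())
}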

In the future, we will provide extensions to light clients to supply non-interactive proofs for blob commitments and other data availability features.

It's also possible that the light client may be on-chain for the header syncing and inclusion proof verification, but this is a low priority right now.

TODO: write and draw up extensions to the light client and draw an architecture diagram

HTTP Sidecar

This sidecar exposes all of the NEAR and Rust interactions over a network. With this approach, anyone can build a rollup in any language, and we reduce the maintenance effort of keeping a client SDK for every rollup SDK. To use it, you can make use of the Go library or any other HTTP client.

  • JsonRPC coming soon.
  • Shareable configs coming soon.

Deploying it is simple: it reads its config from http-config.json, or can be configured at runtime via a PUT to the /configure endpoint.
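For example, here is a minimal sketch of configuring a running sidecar over HTTP. The host and port are placeholders, the http-config.json schema is defined in the repo, and it assumes reqwest with the blocking feature:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reuse the same JSON you would otherwise ship as http-config.json.
    let config = std::fs::read_to_string("http-config.json")?;
    let resp = reqwest::blocking::Client::new()
        .put("http://localhost:5888/configure") // host/port are placeholders
        .header("Content-Type", "application/json")
        .body(config)
        .send()?;
    println!("sidecar responded: {}", resp.status());
    Ok(())
}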

Endpoints can be viewed here. We're in the process of creating a Bruno collection, so only the Plasma endpoints are in there so far; feel free to add the others if you're working with them - we'd welcome the PR.

It is OP Plasma-ready.

Further deployment info can be found in the compose file at the root of the repo.

DA RPC Client

For most rollup SDKs, this client has been superseded by the sidecar approach, so we recommend using the sidecar from now on. If you use Rust, you can still use this crate natively. If there are any dependency incompatibilities, feel free to raise an issue or submit a PR; we strive to keep our crates as permissive as we can.

These crates allow a client to interact with the blob store. It can be treated as a "black box", where blobs go in, and [transaction_id] emerges.

The da-rpc crate is the Rust client, which anyone can use if they prefer Rust in their application. The responsibility of this client is to provide a simple interface for interacting with NEAR DA.

Integrations

We have some proof-of-concept work for integrating with other rollups. We are working to prove the system's capabilities and provide reference implementations for others to follow. They are being actively developed, so they are in flux.

Each rollup has different features and capabilities, even if built on the same SDK. The reference implementations are not necessarily "production grade". They serve as inspiration to help integrators make use of NEAR DA in their system. Our ultimate goal is to make NEAR DA as pluggable as any other tool you might use. This means our heavy focus is on proving, submission and making storage as fair as possible.

Architecture diagrams can be viewed in this directory.

OP Stack

https://github.com/near/optimism

We have integrated with the Optimism OP Stack, utilising the batcher for submissions to NEAR and the proposer for submitting NEAR commitment data to Ethereum.

We have also created Plasma endpoints in the sidecar.

CDK Stack

0xPolygon/cdk-validium-node#129

We have natively integrated with the Polygon CDK stack and implemented support for their full E2E suite.

Arbitrum Nitro

https://github.com/near/nitro

We have integrated a small plugin into the DAC daserver. This is much like our HTTP sidecar: it provides a very modular integration into NEAR DA whilst supporting Arbitrum DACs. In the future, this will likely be the easiest way to support NEAR DA, as it acts as an independent sidecar which can be scaled as needed. It also means a DAC can opt in and out of NEAR DA, lowering its infrastructure burden. With this approach, the DAC committee members only need a "dumb" signing service, with the store backed by NEAR.

👷🚧 Integrating your own rollup 🚧👷

NEAR DA aims to be as modular as possible. Most rollups now support some form of DA server, such as daserver on Arbitrum Nitro, Plasma on OP, and the submission interface on CDK.

Implementing your rollup should be straightforward, assuming you can utilise da-rpc or da-rpc-go (with some complexity here). All the implementations so far have been different, but the general rules have been:

  • find where the sequencer normally posts batch data - for Optimism it was the batcher, for CDK it's the Sequence Sender - and plug the client in.
  • find where the sequencer needs commitments posted - for Optimism it was the proposer, for CDK the synchronizer - and hook the blob reads from the commitment there (a sketch of these two hook points follows).
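A sketch of the two hook points above. The trait and method names here are hypothetical stand-ins for illustration, not the actual da-rpc API:

pub trait DaClient {
    // Batch-posting path (OP batcher / CDK sequence sender): instead of
    // posting calldata to L1, submit the blob to NEAR and post the
    // returned commitment to L1.
    fn submit(&self, blob: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>>;

    // Derivation/sync path (OP proposer / CDK synchronizer): read the
    // commitment from L1 and resolve it back into the original blob.
    fn get(&self, commitment: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>>;
}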

The complexity depends on how pluggable the contract commitment data is. If you can add a field, that would be great! But these waters are mostly uncharted.

If your rollup does anything additional, feel free to hack, and we can try to reach the goal of NEAR DA being as modular as possible.

Getting started

Nix/devenv, Makefiles, Justfiles and scripts are floating around, but here's a rundown of how to get started with NEAR DA. The main objectives are:

  • create near account
  • fund near account (testnet faucet or otherwise)
  • deploy contract (this document/Makefile)
  • sidecar: update http-config.json using the info from the keystore & contract (the secret key is the "private_key" field from your credentials). You can use what we do in our tests if you like:

HTTP_API_TEST_SECRET_KEY=YOUR_SECRET_KEY \
HTTP_API_TEST_ACCOUNT_ID=YOUR_ACCOUNT_ID \
HTTP_API_TEST_NAMESPACE=null \
scripts/enrich.sh

Prerequisites

Rust, Go, CMake, and friends should be installed. For a list of required installation items, please look at flake.nix#nativeBuildInputs. If you use Nix, you're in luck! Just run direnv allow, and you're good to go.

Ensure you have set up near-cli. For the Makefiles to work correctly, you need the near-cli-rs version of NEAR CLI. Make sure you set up some keys for your contract; the documentation above should help. You can write these down, or query them from ~/.near-credentials/** later.

If you didn't clone with submodules, sync them: make submodules

Note that there are some semantic differences between near-cli-rs and near-cli-js. Notably, keys generated with near-cli-js used to have an account_id key in the JSON object. This is omitted in near-cli-rs because it's already in the filename, but some applications require the field, so you may need to add it back in.
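For reference, a credentials file with the field re-added looks roughly like this (values are placeholders):

{
  "account_id": "your-account.testnet",
  "public_key": "ed25519:...",
  "private_key": "ed25519:..."
}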

If using your own contract

If you're using your own contract, you must build it yourself and make sure you set the keys.

To build the contract:

make build-contracts

The contract will now be in ./target/wasm32-unknown-unknown/release/near_da_blob_store.wasm.

Now, to deploy: once you've decided where you want to deploy and have permission to do so, set $NEAR_CONTRACT to the account you want to deploy to and sign with. Advanced users should look at the command and adjust it as needed.

Next up: make deploy-contracts

Remember to update your .env file for DA_KEY, DA_CONTRACT, and DA_ACCOUNT for later use.

If deploying optimism

First, clone the repository

Configure ./ops-bedrock/.env.example. Copy it without the .example suffix and add the keys, contract address, and signer from your NEAR wallet. It should work out of the box.

If deploying optimism on arm64

You can use a docker image to standardize the builds for da-rpc-sys and genesis.

make da-rpc-sys-unix
This will copy the contents of the da-rpc-sys-docker generated libraries to the gopkg/da-rpc folder.

make op-devnet-genesis-docker
This will create a docker image to generate the genesis files.

make op-devnet-genesis
This will generate the genesis files in a docker container and push the files to the .devnet folder.

make op-devnet-up
This should build the docker images and deploy a local devnet for you.

Once up, observe the logs.

make op-devnet-da-logs

You should see got data from NEAR and submitting to NEAR

Of course, to stop

make op-devnet-down

If you just want to get up and running and have already built the docker images (using something like make bedrock images), there is a docker-compose-testnet.yml in ops-bedrock you can play with.

If deploying polygon CDK

First, clone the repository

Now, we have to pull the docker image containing the contracts.

make cdk-images

Why is this different to the OP Stack?

When building the contracts in cdk-validium-contracts, it does a little more than build contracts. It creates a local eth devnet, deploys the various components (CDKValidiumDeployer & friends), then generates the genesis and posts it to L1 at some arbitrary block. The block number that the L2 genesis gets posted to is non-deterministic, and this block is then fed into the genesis config in cdk-validium-node/tests. For this reason, we want an out-of-the-box deployment, so using a pre-built docker image is incredibly convenient.

It's fairly reasonable that, when scanning for the original genesis, we can just query a bunch of blocks between 0..N for the genesis data. However, this feature doesn't exist yet.

Once the image is downloaded (or advanced users have built the image and modified the genesis config for tests), we need to configure an env file again. The example envfile is at ./cdk-stack/cdk-validium-node/.env.example and should be updated with the above-mentioned variables.

Now we can do:

cdk-devnet-up

This will spawn the devnet and an explorer for each network at localhost:4000 (L1) and localhost:4001 (L2).

Run a transaction, then check out your contract on NEAR and verify the commitment against the last 64 bytes of the transaction made to L1.

You'll get some logs that look like:

time="2023-10-03T15:16:21Z" level=info msg="Submitting to NEARmaybeFrameData{0x7ff5b804adf0 64}candidate0xfF00000000000000000000000000000000000000namespace{0 99999}txLen1118"
2023-10-03T15:16:21.583Z	WARN	sequencesender/sequencesender.go:129	to 0x0DCd1Bf9A1b36cE34237eEaFef220932846BCD82, data: 438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59	{"pid": 7, "version": ""}
github.com/0xPolygon/cdk-validium-node/sequencesender.(*SequenceSender).tryToSendSequence
	/src/sequencesender/sequencesender.go:129
github.com/0xPolygon/cdk-validium-node/sequencesender.(*SequenceSender).Start
	/src/sequencesender/sequencesender.go:69
2023-10-03T15:16:21.584Z	DEBUG	etherman/etherman.go:1136	Estimating gas for tx. From: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, To: 0x0DCd1Bf9A1b36cE34237eEaFef220932846BCD82, Value: <nil>, Data: 438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59	{"pid": 7, "version": ""}
2023-10-03T15:16:21.586Z	DEBUG	ethtxmanager/ethtxmanager.go:89	Applying gasOffset: 80000. Final Gas: 246755, Owner: sequencer	{"pid": 7, "version": ""}
2023-10-03T15:16:21.587Z	DEBUG	etherman/etherman.go:1111	gasPrice chose: 8	{"pid": 7, "version": ""}

For this transaction, the blob commitment was 7f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59

And if I check the CDKValidium contract 0x0dcd1bf9a1b36ce34237eeafef220932846bcd82, the root was at the end of the calldata.

0x438a53990000000000000000000000000000000000000000000000000000000000000060000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb922660000000000000000000000000000000000000000000000000000000000000180000000000000000000000000000000000000000000000000000000000000000233a121c7ad205b875b115c1af3bbbd8948e90afb83011435a7ae746212639654000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c2f3400000000000000000000000000000000000000000000000000000000000000005ee177aad2bb1f9862bf8585aafcc34ebe56de8997379cc7aa9dc8b9c68d7359000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000651c303600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040b5614110c679e3d124ca2b7fca6acdd6eb539c1c02899df54667af1ffc7123247f5aa2475d57f8a5b2b3d3368ee8760cffeb72b11783779a86abb83ac09c8d59
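A small sketch of that extraction, following the layout above: the trailing 64 bytes of the sequencer calldata are the commitment data, of which the final 32 bytes are the blob commitment.

// Returns (first 32-byte word, 32-byte blob commitment) from the tail
// of the sequencer calldata, or None if the calldata is too short.
fn commitment_from_calldata(calldata: &[u8]) -> Option<(&[u8], &[u8])> {
    let frame = calldata.get(calldata.len().checked_sub(64)?..)?;
    Some((&frame[..32], &frame[32..]))
}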

If deploying arbitrum nitro

Build daserver/datool: make target/bin/daserver && make target/bin/datool

Deploy your DA contract as above.

Update the daserver config to introduce the new configuration fields:

"near-aggregator": { "enable": true, "key": "ed25519:insert_here", "account": "helloworld.testnet", "contract": "your_deployed_da_contract.testnet", "storage": { "enable": true, "data-dir": "config/near-storage" } },

target/bin/datool client rpc store --url http://localhost:7876 --message "Hello world" --signing-key config/daserverkeys/ecdsa

Take the hash, check the output:

target/bin/datool client rest getbyhash --url http://localhost:7877 --data-hash 0xea7c19deb86746af7e65c131e5040dbd5dcce8ecb3ca326ca467752e72915185

data-availability's People

Contributors

caseylove, dependabot[bot], dndll, ecp88, encody, firatnear, firatsertgoz, hyodar, iavl, palozano, taco-paco


data-availability's Issues

Extract DA features out of the light client

We want to keep the light client a black box and apply extensions to its capabilities with NEAR DA. In its current state, we write the extensions onto it directly.

Expose a plugin system (light client as a library) so it can live as a standalone entity that we can work with for DAS. This means we can eventually release the light client and get it audited without it affecting DAS.

Split polygon cdk into a standalone repository

blocked by #31

### Tasks
- [ ] Pushing docker images
- [ ] Additional ticket for CI
- [ ] Remove builder magic for docker images
- [ ] Remove go workspace hack
- [ ] Use the newly published da-rpc-go pkg

Expose a singular FFI interface for golang -> op-rpc-sys

Right now the FFI is utilised in 3 places in Optimism:

  • da_client, used by op-node to instantiate the da client
  • calldata_source, used by op-node to join Frames with FrameData (read from near)
  • op-service/txmgr, used by the batcher to submit FrameData to near

This should use one op-near module that:

  • creates the client
  • exposes any DTOs
  • any utilities needed for transformations
  • error reading from the err_msg ptr

op-node & batcher should utilise this for their DTOs and client instantiation.

Blob store clearance

Description

Currently, we manually clear the store using clear_all. We should expose functionality in the blob store to clear after a time period and allow this to be configurable per namespace.

If a rollup has a required DTD of 7 days, we can clear it after around 8 days to be safe.
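A hypothetical shape for that per-namespace configuration; the names here are illustrative, not the contract's actual API:

pub struct RetentionPolicy {
    pub namespace: u32,
    // Retain at least the rollup's dispute window plus slack,
    // e.g. a 7-day DTD -> clear after ~8 days.
    pub retain_ns: u64, // nanoseconds, matching NEAR block timestamps
}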

Blob store storage management

Currently, we just deposit onto the blob store; we should pass this cost onto users, since multiple namespaces could be supported in the store.

Security Policy violation SECURITY.md

This issue was automatically created by Allstar.

Security Policy Violation
Security policy not enabled.
A SECURITY.md file can give users information about what constitutes a vulnerability and how to report one securely so that information about a bug is not publicly visible. Examples of secure reporting methods include using an issue tracker with private issue support, or encrypted email with a published key.

To fix this, add a SECURITY.md file that explains how to handle vulnerabilities found in your repository. Go to https://github.com/near/rollup-data-availability/security/policy to enable.

For more information, see https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository.


This issue will auto resolve when the policy is in compliance.

Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer.

Investigate if we should use `near-fetch`

We are reimplementing a bunch of stuff near-fetch already has. Ideally we would just contribute work there (unless they're using incompatible runtimes), but we should try to reach a sovereign client that isn't broken by these things.

near-fetch also already has:

  • memoization of nonces
  • retryability

If it doesn't have archival failover, we should probably roll up our stuff into that.

[Placeholder] Onchain light client

It's relatively easy to do, but we don't quite need it right now, and it introduces actor complexity for submitting headers, proofs, and such.

CDK Submission recovery on ethereum error

Description

When a submission passes but the subsequent Ethereum transaction fails, the NEAR submission is replayed by CDK endlessly.

We should have a mechanism to understand:

  • why it reverts - so far all the gas estimator says is "reverted"
  • how to recover if we already submitted

Attached are some logs.
_cdk-validium-sequence-sender_logs (3).txt

### Tasks
- [x] Why contract reverts
- [x] Move DA submission outside of failover

Split the op-stack into a standalone repository

Make sure to retain the same structure so we can get upstream updates for free.

Blocked by #31

### Tasks
- [ ] Same structure as optimism
- [ ] CI ticket
- [ ] Pulling da-rpc-go
- [ ] Remove go workspace hack
- [ ] Remove builder magic for docker images
- [ ] Use the newly published da-rpc-go pkg

Remove submodules and clean out repo

We're public now, so no need for submodules

remove:

  • light-client (introduce same CI as here there)
  • op-stack (fix any weird makefile docker stuff)
  • cdk (move into near)

publish:

  • da-rpc-go (on merge)
  • da-ffi docker image to this package
  • cdk-docker image to own package

Light client tests and docs

There are tests for the main protocol and some unit tests for features, but we should try to get this up to a nice standard of testing and rustdoc

Also lints

Magi as a sequencer

Description

Determine what needs to be done to use magi as a sequencer.

Additional info

The original op-node is incredibly unreliable, with panics and null pointers in most places, and it is written in Go. We have a partial implementation of DA on Magi over at ./op-stack/optimism-rs. The maintainers of Magi are very receptive to working together and to us supporting Magi's development, especially around the DA interface issues. It has been confirmed that we shouldn't use Magi in production yet because it is new and unaudited, but for us it would probably be easier to work with Magi since it is in Rust.

The main issue is that Magi is missing a bunch of endpoints the sequencer/batcher actors need; whilst it is able to run as a non-sequencer, it needs a few endpoints to work well. So far, the endpoints I found are:

  • optimism_rollupConfig
  • optimism_sync

Additionally, this work might mitigate any headaches we have with Go -> Rust as we build the client for the DA layer, since the current client is written in Go (./op-stack/openrpc) and is mostly untested.

Harden deployment of optimism

Description

We need to harden the deployment of Optimism in a development & production setting. Right now, both the testnet & devnet are set up in various bash scripts, which tend to fail every time.

Since everything was set up in scripts that mount random directories, the system was hard to deploy as a container, so everything ran on one server. Ultimately we need to be able to scale these things, so containerisation without relying on scripts is necessary: files should be mounted, and environments should be pre-created and bootstrapped.

We should use envfiles and limit any custom scripts as much as possible to prepare a reasonable deployment that doesn't require intervention.

Also, we should ensure the NEAR private keys we provide can be changed; right now there are discrepancies because of near-api-js/api-rs. Assign one access key to the batcher and one to block derivation (which should be read-only soon anyway).

Also, we should allocate one access key per actor (batcher, proposer & sequencer).

Decide on blob root approach

Right now, the blob commitment is built as in Celestia's blob module, which uses a namespaced Merkle tree and an MMR to build a Merkle root; this is posted to Ethereum along with the block height.

We should decide whether we:

  • build the root ourselves in the client side, and post this when we submit the blobs
  • build the root in the contract and return it

Relates to #8 - blocked by it

Update Namespace to use [u8 ++ u32]

Namespaces are currently unbounded. Update them in the contract, node & client to be a u32; this should cover plenty of integrations.

Edit: added version byte to prefix
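A sketch of the proposed bounded namespace, a version byte prefixing a u32 id:

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct Namespace {
    pub version: u8,
    pub id: u32,
}

impl Namespace {
    // Encode as [u8 ++ u32]: 1 version byte followed by 4 big-endian id bytes.
    pub fn to_bytes(self) -> [u8; 5] {
        let mut out = [0u8; 5];
        out[0] = self.version;
        out[1..].copy_from_slice(&self.id.to_be_bytes());
        out
    }
}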

Add note in the README how to use Aurora's light client

We should add a note in the README explaining that if you don't trust our light client, that's fine: we can relay to Aurora, and even work this into the application if you're willing to pay the gas for verification on-chain.

Needs: DA-RPC merkle root

Optimise gas calls for contract layer

We don't utilise any of the data we provide to the contract, but we still deserialise it.

We should try to remove as much gas from these calls as possible. Some ideas were:

  • remove bindgen
  • no_std? (see keypom contracts, not the main one)
  • no serde deserialisation
  • no logging
  • no panics

This will require updating the Rust client to pass borsh-serialized bytes when submitting, and modifications on retrieval to deserialise the opaque bytes.
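A sketch of the client-side change, assuming borsh 1.x with the derive feature (the Blob shape here is illustrative): serialize once off-chain, let the contract treat the bytes as opaque, and deserialise only on retrieval.

use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
struct Blob {
    namespace: u32,
    data: Vec<u8>,
}

fn main() -> std::io::Result<()> {
    let blob = Blob { namespace: 1, data: b"rollup batch".to_vec() };
    // Client side: serialize once; the contract never deserialises this.
    let opaque = borsh::to_vec(&blob)?;
    // Retrieval side: deserialise the opaque bytes off-chain.
    let back = Blob::try_from_slice(&opaque)?;
    assert_eq!(back, blob);
    Ok(())
}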

[WIP] Data Availability Sampling

As of writing this issue, we provide DAS 0.1, which exposes Merkle inclusion proofs over an HTTP API in the light client. The workflow is as follows:

Assume client is synced

  • L2 -> Contract: submits blob at Tx(ABC); the contract returns [height, blob_commitment_root]
  • L2 -> L1: posts [height, blob_commitment_root]
  • Anyone can find the transaction ID for height
  • Anyone -> Light client/Validator/Archive: request an inclusion proof for the transaction ID or the receipt
  • Anyone can manually calculate inclusion of the transaction/receipt, if adept enough
  • Anyone -> Light client: verify the proof
  • Anyone can read the contract store at blob_commitment_root and calculate the root themselves

This works off the assumption that NEAR validators:

  • have not been compromised
  • will not withhold proofs to the light client (currently get_light_client_proof is EXPERIMENTAL)
  • have not removed the data
  • will hold the data for a period of time

As well as the light client:

  • is synced correctly
  • has not been subject to a long range attack
  • has the canonical head
  • will not withhold proofs to the user

To limit trust assumptions, we have to

Better repository structure

Currently, the naming of everything, and the place each piece lives, is unorthodox.

For example:

  • ./op-stack/da-rpc implies this is for optimism, but this is also used by polygon
  • ./crates/op-rpc-* is the same as above
### Tasks
- [x] Rename ./crates/op-rpc to `./crates/da-rpc` and *-sys, make sure the manifest is `near-da-rpc` and `*-sys`
- [x] Ensure docker images build and the naming is appropriate
- [x] Move ./op-stack/da-rpc to ./gopkg/da-rpc-go and ensure the package is named `github.com/near/rollup-data-availability/da-rpc-go`, ensure Docker images and FFI works.
- [x] go.work for cdk & op-stack should be updated
- [x] scripts for *-sys install should point to new directory
- [x] Ensure the header file for da-rpc-sys is nice and works with the build script

Convert op-stack/openrpc to rust and expose it to op-node

Since we can't use Magi as a sequencer yet, and the openrpc implementation is mostly untested, work is needed to build a reasonable integration with NEAR.

The go-near-api built by folks at Aurora doesn't expose some functionality needed to make view RPC calls; we did try to do this via reflection, but there were issues. I think the best foot forward here is to build a client in Rust that we can control and then expose it to op-node & friends.

Note: the client work is implemented in op-stack/optimism-rs; we need to expose FFI to this and then implement tests and such.

Private forks

Since this project hasn't been released yet, we need to move the forks from my personal account to private repos at Pagoda or here. This means we can build our own CI and our infrastructure, and roll any public stability enhancements into the main node until we eventually announce this project.

error when make -C ./crates/da-rpc-sys

thread 'main' panicked at 'cbindgen not found in path', crates/da-rpc-sys/build.rs:15:19
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
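A likely fix, assuming cbindgen simply isn't installed: run cargo install cbindgen and make sure ~/.cargo/bin is on your PATH, since the build script appears to shell out to the cbindgen binary.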
