ethereum-optimism / optimism

Optimism is Ethereum, scaled.

Home Page: https://optimism.io

License: MIT License

TypeScript 13.87% JavaScript 0.08% Solidity 20.83% Shell 0.39% Makefile 0.24% Go 63.32% Python 0.29% Dockerfile 0.14% Assembly 0.66% HCL 0.05% Rust 0.13%
ethereum ovm rollup l2-scaling optimism

optimism's Introduction



Optimism

Optimism is Ethereum, scaled.


Table of Contents

What is Optimism?

Optimism is a project dedicated to scaling Ethereum's technology and expanding its ability to coordinate people from across the world to build effective decentralized economies and governance systems. The Optimism Collective builds open-source software for running L2 blockchains and aims to address key governance and economic challenges in the wider cryptocurrency ecosystem. Optimism operates on the principle of impact=profit, the idea that individuals who positively impact the Collective should be proportionally rewarded with profit. Change the incentives and you change the world.

In this repository, you'll find numerous core components of the OP Stack, the decentralized software stack maintained by the Optimism Collective that powers Optimism and forms the backbone of blockchains like OP Mainnet and Base. Designed to be "aggressively open source," the OP Stack encourages you to explore, modify, extend, and test the code as needed. Although not all elements of the OP Stack are contained here, many of its essential components can be found within this repository. By collaborating on free, open software and shared standards, the Optimism Collective aims to prevent siloed software development and rapidly accelerate the development of the Ethereum ecosystem. Come contribute, build the future, and redefine power, together.

Documentation

Specification

If you're interested in the technical details of how Optimism works, refer to the Optimism Protocol Specification.

Community

General discussion happens most frequently on the Optimism Discord. Governance discussion can also be found on the Optimism Governance Forum.

Contributing

Read through CONTRIBUTING.md for a general overview of the contributing process for this repository. Use the Developer Quick Start to get your development environment set up to start working on the Optimism Monorepo. Then check out the list of Good First Issues to find something fun to work on! Typo fixes are welcome; however, please create a single commit with all of the typo fixes & batch as many fixes together in a PR as possible. Spammy PRs will be closed.

Security Policy and Vulnerability Reporting

Please refer to the canonical Security Policy document for detailed information about how to report vulnerabilities in this codebase. Bounty hunters are encouraged to check out the Optimism Immunefi bug bounty program. The Optimism Immunefi program offers up to $2,000,042 for in-scope critical vulnerabilities.

Directory Structure

├── docs: A collection of documents including audits and post-mortems
├── op-batcher: L2-Batch Submitter, submits bundles of batches to L1
├── op-bindings: Go bindings for Bedrock smart contracts
├── op-bootnode: Standalone op-node discovery bootnode
├── op-chain-ops: State surgery utilities
├── op-challenger: Dispute game challenge agent
├── op-e2e: End-to-End testing of all bedrock components in Go
├── op-heartbeat: Heartbeat monitor service
├── op-node: Rollup consensus-layer client
├── op-preimage: Go bindings for Preimage Oracle
├── op-program: Fault proof program
├── op-proposer: L2-Output Submitter, submits proposals to L1
├── op-service: Common codebase utilities
├── op-ufm: Simulations for monitoring end-to-end transaction latency
├── op-wheel: Database utilities
├── ops: Various operational packages
├── ops-bedrock: Bedrock devnet work
├── packages
│   ├── chain-mon: Chain monitoring services
│   ├── common-ts: Common tools for building apps in TypeScript
│   ├── contracts-bedrock: Bedrock smart contracts
│   ├── contracts-ts: ABI and Address constants
│   ├── core-utils: Low-level utilities that make building Optimism easier
│   ├── fee-estimation: Tools for estimating gas on OP chains
│   ├── sdk: Tools for interacting with Optimism
│   └── web3js-plugin: Adds functions to estimate L1 and L2 gas
├── proxyd: Configurable RPC request router and proxy
├── specs: Specs of the rollup starting at the Bedrock upgrade
└── ufm-test-services: Runs a set of tasks to generate metrics

Development and Release Process

Overview

Please read this section if you're planning to fork this repository, or make frequent PRs into this repository.

Production Releases

Production releases are always tags, versioned as <component-name>/v<semver>. For example, an op-node release might be versioned as op-node/v1.1.2, and smart contract releases might be versioned as op-contracts/v1.0.0. Release candidates are versioned in the format op-node/v1.1.2-rc.1. We always start with rc.1 rather than rc.
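As a sketch, the tag convention above could be parsed like this (illustrative TypeScript; the helper name and return shape are assumptions, not code from this repo):

```typescript
// Parses tags of the form "<component>/v<semver>" with an optional
// "-rc.N" release-candidate suffix, per the convention described above.
function parseReleaseTag(
  tag: string
): { component: string; version: string; rc: number | null } | null {
  const m = tag.match(/^([a-z0-9-]+)\/v(\d+\.\d+\.\d+)(?:-rc\.(\d+))?$/);
  if (!m) return null;
  return { component: m[1], version: m[2], rc: m[3] ? Number(m[3]) : null };
}
```

For example, `parseReleaseTag("op-node/v1.1.2-rc.1")` yields component `op-node`, version `1.1.2`, release candidate 1, while a bare `v1.1.4` tag does not match because it names no component.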

For contract releases, refer to the GitHub release notes for a given release, which will list the specific contracts being released—not all contracts are considered production ready within a release, and many are under active development.

Tags of the form v<semver>, such as v1.1.4, indicate releases of all Go code only, and DO NOT include smart contracts. This naming scheme is required by Go's module versioning. In the above list, this means these v<semver> releases contain all op-* components and exclude all contracts-* components.

op-geth embeds the upstream geth version inside its own version as follows: vMAJOR.<GETH_MAJOR><GETH_MINOR><GETH_PATCH>.PATCH. In short, geth's version becomes our minor version. For example, if geth is at v1.12.0, the corresponding op-geth version would be v1.101200.0. Note that the geth minor version is zero-padded to three digits and the geth patch version to two digits. Since we cannot left-pad with zeroes, the geth major version is not padded.
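The scheme can be sketched as follows (illustrative only; the function name and signature are assumptions, not actual op-geth build logic):

```typescript
// Derives an op-geth version from an upstream geth version per the scheme
// above: geth vG_MAJOR.G_MINOR.G_PATCH becomes the op-geth minor version,
// with the geth minor padded to 3 digits and the geth patch to 2 digits.
function opGethVersion(major: number, gethVersion: string, patch: number): string {
  const [gMajor, gMinor, gPatch] = gethVersion
    .replace(/^v/, "")
    .split(".")
    .map(Number);
  const minor =
    `${gMajor}${String(gMinor).padStart(3, "0")}${String(gPatch).padStart(2, "0")}`;
  return `v${major}.${minor}.${patch}`;
}
```

With geth at v1.12.0 this produces v1.101200.0, matching the worked example above.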

See the Node Software Releases page of the documentation for more information about releases for the latest node components. The full set of components that have releases are:

  • chain-mon
  • ci-builder
  • indexer
  • op-batcher
  • op-contracts
  • op-challenger
  • op-heartbeat
  • op-node
  • op-proposer
  • op-ufm
  • proxyd
  • ufm-metamask

All other components and packages should be considered development components only and do not have releases.

Development branch

The primary development branch is develop. develop contains the most up-to-date software that remains backwards compatible with the latest experimental network deployments. If you're making a backwards compatible change, please direct your pull request towards develop.

Changes to contracts within packages/contracts-bedrock/src are usually NOT considered backwards compatible. Some exceptions to this rule exist for cases in which we absolutely must deploy some new contract after a tag has already been fully deployed. If you're changing or adding a contract and you're unsure about which branch to make a PR into, default to using a feature branch. Feature branches are typically used when there are conflicts between 2 projects touching the same code, to avoid conflicts from merging both into develop.

License

Files within this repository are licensed under the MIT License unless stated otherwise.

optimism's People

Contributors

ajsutton, ben-chain, cfromknecht, clabby, dependabot[bot], elenadimitrova, felipe-op, gakonst, geohot, github-actions[bot], hamdiallam, inphi, karlfloersch, maurelian, mdehoog, mergify[bot], mslipper, norswap, optimismbot, protolambda, qbzzt, refcell, roninjin10, sebastianst, smartcontracts, snario, spacesailor24, tremarkley, trianglesphere, tynes


optimism's Issues

Revive the monorepo

Is your feature request related to a problem? Please describe.

Developers have a hard time working with the submodules in optimism-integration. In addition, the fragmentation across repositories has caused us to have cyclical dependencies and creates large operational overheads when making changes which require touching many repositories.

Describe the solution you'd like

We should revive the monorepo format and move away from the submodule-based optimism-integration setup.

There are five logical groups for our codebases:

  • core: geth (go), core-utils (ts), contracts (sol/ts)
  • services (all ts): dtl, ctc/scc batch submitter, relayer, watcher
  • tooling such as smock/plugins
  • integration tests
  • non-core forks: ethereumjs-ovm, solc

A structure could be that this repository includes the services and tooling, and the core and integration tests codebases are grouped in the go-ethereum repository. Another structure could be to bundle everything in 1 repository (incl. geth w/ everything else). That way, the optimism-integration repository would no longer use submodules, and we'd be able to bundle the integration tests in a single-repo developer workflow (vs having to juggle many PRs right now). We agreed to get rid of ethereumjs-ovm, and I think it's OK to let solc remain in a separate repository.

There's a few constraints to take into account:

  • Developer Experience should be as close as today, where a user can just ./up.sh and it "just works"
  • The "Docker build ⇒ tag ⇒ release ⇒ deploy" flow should remain intact, so that our devops can still be fully automated

`watcher` is missing type declarations

Describe the bug
@eth-optimism/watcher is missing TypeScript type declarations (.d.ts files).

They are specified in package.json (types field) but are not part of the final package uploaded to npm.

Additional context
The files field in package.json should probably be changed from:

"files": [
    "build/**/*.js"
  ],

to:

"files": [
    "build/**/*.js",
	 "build/**/*.ts"
  ],

Batch submitter sometimes reverts with `block number is from the future`

Describe the bug
The batch submitter is reverting when calling getNextPendingQueueElement

To Reproduce
Steps to reproduce the behavior:

  1. yarn hardhat node --no-deploy --fork <mainnet>
  2. Impersonate account for mainnet batch submitter address 0xfd7D4de366850C08EE2CBa32d851385A3071Ec8D
  3. Run transaction batch submitter with supplied config

Expected behavior
The batch should be submitted.

Logs

"id\":75,\"jsonrpc\":\"2.0\"}","requestMethod":"POST","url":"http://localhost:8545","stack":"Error: processing response error (body=\"{\\\"jsonrpc\\\":\\\"2.0\\\",\\\"id\\\":75,\
\\"error\\\":{\\\"code\\\":-32603,\\\"message\\\":\\\"VM Exception while processing transaction: revert Context block number is from the future.\\\"}}\", error={\"code\":-32603}, requestBody=\"{\\\"method\\\":\\\"eth_estimateGas\\\",\\\"params\\\":[{\\\"
gasPrice\\\":\\\"0x1dcd65000\\\",\\\"from\\\":\\\"0xfd7d4de366850c08ee2cba32d851385a3071ec8d\\\",\\\"to\\\":\\\"0x405b4008da75c48f4e54aa39607378967ae62338\\\",\\\"data\\\":

Config

# L1_NODE_WEB3_URL=
# L1_NODE_WEB3_URL=
# MNEMONIC=

ADDRESS_MANAGER_ADDRESS=0x1De8CFD4C1A486200286073aE91DE6e8099519f1
MIN_L1_TX_SIZE=20000
MAX_L1_TX_SIZE=120000
MAX_TX_BATCH_COUNT=250
MAX_STATE_BATCH_COUNT=1000
MAX_BATCH_SUBMISSION_TIME=900
POLL_INTERVAL=15000
NUM_CONFIRMATIONS=15
FINALITY_CONFIRMATIONS=60
RUN_TX_BATCH_SUBMITTER=true
RUN_STATE_BATCH_SUBMITTER=false
CLEAR_PENDING_TXS=false
SAFE_MINIMUM_ETHER_BALANCE=1
GAS_THRESHOLD_IN_GWEI=250
RESUBMISSION_TIMEOUT=900
MAX_GAS_PRICE_IN_GWEI=250
GAS_RETRY_INCREMENT=10

Calls made to the Execution Manager do not return data

Presently it is impossible to get any return data from transactions or calls that go through the ExecutionManager, such as revert reasons or other kinds of data geth may want to use. We made a pull request in the past to address this but decided it was not the right approach, since a change was queued up (and is now implemented) in the Solidity compiler that modifies how the execution manager is interacted with. This issue documents the work needed to revive that PR based on the new Solidity compiler work.

Using `optimism` repo as a submodule results in docker related errors

Describe the bug
If the optimism repo is used as a submodule to another project, this results in docker-compose errors. The exact problem has not been found, but it appeared to be related to problems with the internal network. The services were not being initialized with the dynamic environment variables, in particular the ADDRESS_MANAGER_ADDRESS

To Reproduce
Steps to reproduce the behavior:

  1. Add this repo to another repo as a submodule and pull/update
  2. Start up the docker-compose service from within the submodule
  3. The services will not properly start

Expected behavior
I don't think using this repo as a submodule should break the docker setup - devs need a way to develop their apps against the system. This is meant to save time for future devs who attempt to use this repo as a submodule

Additional context
It would be nice if we could release an npm package that was a CLI tool that wraps docker-compose so that devs could install that package as part of their project and npx run optimism start or something like that

ovm-toolchain requires node v10 which is way outdated

Describe the bug
The ovm-toolchain package requires node v10, which is long outdated; the current LTS is v14.

Expected behavior
It should allow installation and use on newer versions of node by specifying the "engines" field as ">= 10" instead.

Seems to work just fine when force-installed with yarn install --ignore-engines, so it's safe to change.
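A minimal package.json fragment illustrating the proposed fix (the field name in npm's schema is "engines"):

```json
{
  "engines": {
    "node": ">=10"
  }
}
```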

Observed "deployment.spec.ts" is failing with error "TypeError: Cannot read property 'equal' of undefined"

Hi,

Observed "deployment.spec.ts" is failing with error

TypeError: Cannot read property 'equal' of undefined

Compiled 61 contracts successfully


  Contract Deployment
    deployAllContracts
      1) should deploy contracts in a default configuration


  0 passing (2s)
  1 failing

  1) Contract Deployment
       deployAllContracts
         should deploy contracts in a default configuration:
     TypeError: Cannot read property 'equal' of undefined
      at Object.values.forEach (test/deployment/deployment.spec.ts:36:41)
      at Array.forEach (<anonymous>)
      at Context.it (test/deployment/deployment.spec.ts:34:58)
      at process._tickCallback (internal/process/next_tick.js:68:7)



error Command failed with exit code 1.

And the failure was not observed when the way expect is imported from setup was modified as below ...

Compiled 61 contracts successfully


  Contract Deployment
    deployAllContracts
gasConsumer::true
l1ToL2TransactionQueue::true
safetyTransactionQueue::true
canonicalTransactionChain::true
stateCommitmentChain::true
stateManager::true
stateManagerGasSanitizer::true
executionManager::true
safetyChecker::true
fraudVerifier::true
rollupMerkleUtils::true
      ✓ should deploy contracts in a default configuration (1014ms)


  1 passing (2s)

Done in 26.03s.

Any idea what's happening behind the scenes?

yarn add @eth-optimism/core-utils breaks

Describe the bug

When running $ yarn add @eth-optimism/core-utils it breaks if there is not a debug dependency in the workspace root or a mocha dependency in the package itself.

Steps to reproduce
Steps to reproduce the behavior:

  1. $ yarn add @eth-optimism/core-utils in a fresh repo

Expected behavior
It should install without any problems


Specifications

  • Version: "@eth-optimism/core-utils": "^0.0.1-alpha.30"
  • OS: Arch Linux
  • Yarn Version: 1.22.4


Script for diff'ing with upstream geth

Due to the monorepo flow, it is now hard to diff l2geth against upstream geth from inside the repository. We should have a script which accepts a path to a local geth (or a remote geth tag) and prints the diff from the current version.

README improvements

  • Some readmes are still pointing to the old repos (e.g. Contracts)
  • It'd be nice if we added some "common failure modes" and ways to work around them (e.g. setting the RETRIES higher when bringing up the docker stack)
  • Showing how to bring up individual components of the docker stack and showing how to configure them in a more granular way

Only package left is `watcher`

The only package remaining in this repo is the watcher. Should it be moved to its own repo? It also has no unit tests, but it is included in the integration tests, which would fail if it were broken.

Reduce gas usage of SafetyChecker

  • 30% -- Remove the ops array, replace with (ops <<= 8; ops |= op)
  • 10% -- Replace opBit = 2 ** op; with opBit = 1 << op;
  • ?? -- Replace if (op == 0x00 || op == 0xfd || op == 0xfe || op == 0xf3) with a mask
  • ?? -- Potentially gate the whole entry into the if logics on a mask, this is a big win!
  • ?? -- Cache reads from _bytecode, 1 MLOAD per 32 bytes. Complex and annoying
  • ?? -- Replace complex call logic with a simple AND thanks to new compiler
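The mask-based membership check suggested in the list above can be illustrated as follows (TypeScript for readability; the actual SafetyChecker is Solidity, and the constant and function names here are made up):

```typescript
// Instead of `op == 0x00 || op == 0xfd || op == 0xfe || op == 0xf3`,
// precompute a 256-bit mask with one bit set per opcode of interest and
// test membership with a single AND. bigint is used since 256 bits are
// needed to cover the full opcode space.
const HALTING_OPS = [0x00, 0xfd, 0xfe, 0xf3]; // STOP, REVERT, INVALID, RETURN
const HALT_MASK: bigint = HALTING_OPS.reduce(
  (mask, op) => mask | (1n << BigInt(op)),
  0n
);

function isHaltingOp(op: number): boolean {
  return (HALT_MASK & (1n << BigInt(op))) !== 0n;
}
```

The same trick generalizes to gating entire branches of the checker on one mask test, which is where the list above expects the big win.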

Remove duplication from CI Release workflow

The Problem

The release code is heavily duplicated in order to allow jobs to be launched dynamically, conditional on the job being output by changesets.

The main challenge is having conditional/dynamic logic in the matrix generation, so that the generated jobs match only the published packages (whereas now we have hard-coded the jobs and conditionally trigger them).

The Solution

Ideally, what we'd want is a dynamic matrix using publishedPackages which builds/pushes the services, and also "intelligently" builds the builder container only once (instead of having to re-build it for each service).

Another solution

Maybe we do not need to intelligently build the builder image, and just re-building it once per service is fine.

Enable ability to upgrade code and storage of L2 contracts

Is your feature request related to a problem? Please describe.
We currently do not have any way (other than regenesis) to upgrade our L2 contracts! This is no good as regenesis wipes historical transactions & requires coordination with everyone running nodes. Instead we need a way to cleanly upgrade our contracts that enables minimal downtime, multi-party coordination, and max simplicity.

Describe the solution you'd like
Implement automated OVM self upgrades. See a larger exploration of this topic here: https://www.notion.so/optimismpbc/Project-OVM-Upgradability-a0f2298a5aba469ea5749d6feb1aae3e

A simplified example flow for L2 upgrades here look like this:

  1. Make change to our predeploys in monorepo
  2. Run the deployer (aka chugsplash). This will:
  • Generate a deployment script with all code & state that we want in the deployment based on the deployment config
  • Generate a commitment to the deployment
  • Everyone signs a multi-sig which approves the deployment based on the generated commitment
  • The commitment is deposited into L2 from L1
  • Pause transaction ingestion in L2Geth
  • On L2 the upgrader contract calls setCode and setStorage based on the information in the commitment.
  • Resume transaction ingestion in L2Geth / in the ExecutionManager now that the upgrade has completed.
  3. Celebrate because our upgrades are automated!

TODO:

  • Add an L2 predeploy Upgrader which is authenticated to call setCode and setStorage
  • Update L2Geth to support setCode
  • Write UpgradeExecutor contract for L2 that calls the Upgrader contract based on the deployment commitment that was deposited.
  • Implement deployer (aka chugsplash)
    • Add deployment script generator which generates a deployment script based on the deploy config
    • Add L1 UpgradeExecutor which will just execute an enqueue which initiates the L2 deployment (which will be executed through the L2 UpgradeExecutor)
    • Add L1 / L2 deployment executor JS code which, based on a deployment script, executes the deployment if the deployment has been authorized / initiated.
    • Perform an L2 upgrade using this full system!
  • Update infra & l2geth to ensure upgrades are fully automated & no transactions are sent during an upgrade.
    • Should update that status page to UPGRADE IN PROGRESS or something like that.
  • Ensure it is possible to add env vars which set the environment-specific variables like FRAUD_PROOF_WINDOW & SEQUENCER_ADDRESS.

Note on status quo deployment configuration

Our current deployment system splits out deployments into 2 configurations:

  1. Deployment configuration -- deploy files which define the default deployment one would perform. Additionally, these files make use of environment variables passed in through process.env
  2. Environment configuration -- When we update the contracts or the deployment script, we tag a new release which results in a final infrastructure config that looks similar to this: kovan.json - { image: 'v0.2.0', env: { 'FRAUD_PROOF_WINDOW': 100, ... }} (note: JSON is used because it's easy to write a single-line example)

Correcting Fraudulent State Roots Off-Chain

Describe the bug
Right now the next_verification_batch view hints at the idea that there will be a REMOVED state for l1_rollup_state_root_batch entries that follows the FRAUDULENT state, after fraud is proven and state roots are removed from the state commitment chain on Ethereum.

The code does not currently suggest a way to update this batch to UNVERIFIED upon submission of new state roots for the batch in question so that the next_verification_batch view will start returning records again and verification may proceed.

This is a bug report / feature request / bit of documentation of the fact that there will need to be something that rights the fraudulent batch record so verification may proceed. Without being too prescriptive, one way to do this would be to check for a REMOVED batch when persisting l1_rollup_state_root records and update the batch instead of inserting a new one.

Steps to reproduce
Features required to get into this situation have yet to be developed.

Expected behavior

  1. Fraud is committed
  2. Fraud is proven
  3. State roots are removed from the Ethereum chain
  4. State roots & batch are marked as REMOVED in the DB on handling of the removed event on-chain
  5. Valid state roots are submitted on-chain
  6. Some part of the process persisting the new state roots updates the state root batch to UNVERIFIED
  7. Valid (claimed) state roots are verified against computed state roots, and batch is updated to VERIFIED
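The suggested fix, flipping a REMOVED batch back to UNVERIFIED when new state roots arrive instead of inserting a duplicate, can be sketched as follows (the store interface is hypothetical, not the actual Postgres persistence layer):

```typescript
// Batch statuses from the verification flow described above.
type BatchStatus = "UNVERIFIED" | "VERIFIED" | "FRAUDULENT" | "REMOVED";

// Hypothetical persistence interface standing in for the real DB layer.
interface BatchStore {
  getStatus(batchNumber: number): BatchStatus | undefined;
  setStatus(batchNumber: number, status: BatchStatus): void;
  insert(batchNumber: number, status: BatchStatus): void;
}

// When persisting l1_rollup_state_root records for a batch, resurrect a
// REMOVED batch as UNVERIFIED so next_verification_batch returns it again.
function persistStateRootBatch(store: BatchStore, batchNumber: number): void {
  const status = store.getStatus(batchNumber);
  if (status === "REMOVED") {
    store.setStatus(batchNumber, "UNVERIFIED");
  } else if (status === undefined) {
    store.insert(batchNumber, "UNVERIFIED");
  }
}
```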

Run the CI in parallel for speed

Is your feature request related to a problem? Please describe.
The CI often runs slowly because it runs sequentially. The integration tests run faster than the unit tests.

Describe the solution you'd like
Run the CI in parallel using Github Workflows

Geth RPC does not return revert reason when transaction reverts

Describe the bug
When testing for whether an OVM-based transaction will revert, the transaction will revert without the expected error message being thrown. Instead, the following error message is returned from L2 geth.

To Reproduce
Steps to reproduce the behavior:

  1. Git clone Waffle or Truffle repository
  2. Yarn install, yarn compile for the OVM, and run tests
  3. For Waffle, see all but the last 2 reversion tests pass, and for Truffle, see no tests pass since contract deployment fails with insufficient gas error.

Expected behavior
If the transaction fails, L2 geth should get the revert reason returned from the contract instead of throwing an estimate gas error.

Screenshots
Screen Shot 2021-04-17 at 7 59 58 PM

Versioning Weirdness

Describe the bug
Different modules depend on different versions of things, and sometimes the way that JS links code together results in errors. I've found that deleting yarn.lock and all node_modules directories helps in figuring out what the problem is. Also make sure there are no symlinks (npm link, yarn link).

Verifier Transaction Batch Ordering

Describe the bug
Verifiers will

  1. Passively observe Ethereum contracts
  2. Compute the resulting state of rolled up transactions by using the OVM geth fork
  3. Prove fraud resulting from a mismatch between claimed resulting state and computed resulting state

Batches of rollup transactions will likely contain more than one rollup transaction (that's the whole benefit of Optimistic Rollup), and these transactions will need to be processed by the OVM geth fork in order of batch number, followed by index within batch.

At the moment, the mechanism by which order is enforced when transmitting transactions to the OVM geth fork does not work: it assigns all transactions within a batch the same index.

Steps to reproduce
Steps to reproduce the behavior:

  1. Run a verifier pointing at contracts with rollup transactions posted (starting at an L1 block number before they were posted)
  2. Verify that l1_rollup_tx entries all have an index_within_submission of 0

Expected behavior
The l1_rollup_tx.index_within_submission should be unique in combination with geth_submission_queue_index, and the first transaction in the rollup batch should get index 0, the second gets index 1, and so on...
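The expected indexing can be sketched as follows (illustrative types, not the real schema):

```typescript
// Each rollup tx in a batch should get a unique index_within_submission,
// starting at 0, unique in combination with the submission queue index.
interface L1RollupTx {
  data: string;
}

interface IndexedTx extends L1RollupTx {
  gethSubmissionQueueIndex: number;
  indexWithinSubmission: number;
}

function indexBatch(txs: L1RollupTx[], queueIndex: number): IndexedTx[] {
  return txs.map((tx, i) => ({
    ...tx,
    gethSubmissionQueueIndex: queueIndex,
    indexWithinSubmission: i, // position within the batch, not a constant 0
  }));
}
```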

Change default transaction encoding to be RLP in order to support value field

Is your feature request related to a problem? Please describe.
We currently do not have the ability to send a value with our transactions. This was due to a custom encoding format which we implemented which didn't have a value field -- that's dumb! Additionally, supporting a custom tx format is annoying for staying 1:1 with eth1.x, especially with the introduction of transaction envelopes.

Once we've added support for RLP transactions we will be in a much better place with our transaction ingestion logic.
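For reference, RLP itself is a simple format; a minimal encoder sketch (illustrative only, not l2geth's implementation):

```typescript
// RLP: a single byte < 0x80 encodes as itself; otherwise a length prefix
// (0x80 + len for byte strings, 0xc0 + len for lists, with a long form
// once len >= 56) is followed by the payload.
type RlpItem = Uint8Array | RlpItem[];

function rlpEncode(item: RlpItem): Uint8Array {
  if (item instanceof Uint8Array) {
    if (item.length === 1 && item[0] < 0x80) return item;
    return concatBytes(lengthPrefix(item.length, 0x80), item);
  }
  const payload = concatBytes(...item.map(rlpEncode));
  return concatBytes(lengthPrefix(payload.length, 0xc0), payload);
}

function lengthPrefix(len: number, offset: number): Uint8Array {
  if (len < 56) return Uint8Array.of(offset + len);
  const bytes: number[] = [];
  for (let l = len; l > 0; l = Math.floor(l / 256)) bytes.unshift(l % 256);
  return Uint8Array.of(offset + 55 + bytes.length, ...bytes);
}

function concatBytes(...arrs: Uint8Array[]): Uint8Array {
  const out = new Uint8Array(arrs.reduce((n, a) => n + a.length, 0));
  let off = 0;
  for (const a of arrs) { out.set(a, off); off += a.length; }
  return out;
}
```

An EIP-155 transaction is then just the RLP list of its fields (nonce, gasPrice, gasLimit, to, value, data, v, r, s), which is what makes a value field come for free once the custom encoding is dropped.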

Describe the solution you'd like
Pretty much blocked by #495

TODO

  • Update SequencerEntrypoint to decode RLP transactions instead of custom encoding.
  • Update batch submitter to operate on raw transactions
  • Change core-utils (maybe data-transport-layer as well) to use the new RLP encoding format
  • Update l2geth to encode using RLP, not our custom format.

In Depth

Contracts

Implementation: https://github.com/ethereum-optimism/contracts/pull/300/files

Finish and merge this PR.

Batch Submitter

Implementation: ethereum-optimism/batch-submitter#56

Finish and merge this PR.

core-utils

Update the encoding which we use here: https://github.com/ethereum-optimism/optimism/blob/master/packages/core-utils/src/coders/ecdsa-coder.ts#L54

Rename this encoding to EIP155Transaction; the current name, "ECDSA transaction", is not descriptive.

data transport layer

Re-integrate: https://github.com/ethereum-optimism/data-transport-layer/pull/72

All contain references to old types. We've got to update these.

l2geth

Remove:

Update:

Bug :: Oversized transaction error

Describe the bug
We often go over the size limit of transactions when submitting batches. This causes our transactions to fail and generally everyone has a bad day.

Steps to reproduce
Steps to reproduce the behavior:

  1. Set the transaction calldata size to a number around 132KB or above (failures occur even slightly below this number)
  2. Run the full system
  3. Observe errors submitting batches

Expected behavior
We should never submit transactions above the maximum size.

Solution

Work-around

Reduce the Max L2 Tx calldata size for a batch:

  1. Update the l2_geth_subscriber's task definition to have a lower CANONICAL_CHAIN_BATCH_MAX_CALLDATA_BYTES value (right now it's 90,000)
  2. Update the ECS service to reference the new task definition and restart the ECS task so it picks up the new config (stop it, and it'll restart itself)
  3. Run this query to delete created but unsubmitted batches so that they will be rebuilt to fit the lower size limit:
BEGIN;

UPDATE l2_tx_output
SET 
	canonical_chain_batch_number = NULL,
	canonical_chain_batch_index = NULL
WHERE canonical_chain_batch_number >= *batch number that is erroring*;

DELETE FROM canonical_chain_batch
WHERE batch_number >= *batch number that is erroring*;

COMMIT;

Solution

Since the total size should be predictable, accurately calculate the total transaction bytes for the L1 rollup tx (including all the rolled-up L2 txs' bytes) and create batches such that this number of bytes is less than or equal to CANONICAL_CHAIN_BATCH_MAX_CALLDATA_BYTES, instead of simply making sure that each individual L2 tx being rolled up is under this number.
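The proposed calculation can be sketched like this (illustrative TypeScript; the overhead constant and function names are assumptions, not the batch submitter's actual code):

```typescript
// Greedily pack tx sizes into batches so that the *total* encoded size
// (fixed batch overhead + sum of tx sizes) stays at or under the limit,
// rather than only checking each tx individually.
const MAX_CALLDATA_BYTES = 90_000;
const BATCH_OVERHEAD_BYTES = 200; // assumed fixed per-batch encoding overhead

function buildBatches(
  txSizes: number[],
  maxBytes: number = MAX_CALLDATA_BYTES
): number[][] {
  const batches: number[][] = [];
  let current: number[] = [];
  let total = BATCH_OVERHEAD_BYTES;
  for (const size of txSizes) {
    // Start a new batch when adding this tx would exceed the limit.
    // A tx that alone exceeds the limit still gets its own batch here;
    // the real system would have to reject or split it.
    if (current.length > 0 && total + size > maxBytes) {
      batches.push(current);
      current = [];
      total = BATCH_OVERHEAD_BYTES;
    }
    current.push(size);
    total += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```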

Provider L1BlockNumber should never be null

Describe the bug
Right now there is logic handling the case where tx.l1BlockNumber is null. It should never be null; instead, the last L1BlockNumber should be returned in the case of queue origin sequencer. Right now, the RPC will always return null when the queue origin is sequencer.

Document running a replica

Note: this is out of date, see later comments for the v0.3.0 config which is up to date

The instructions for running a replica should be clearly documented in a way that does not go stale over time.
The deployed addresses can be found here: https://github.com/ethereum-optimism/optimism/blob/master/packages/contracts/addresses.md

An example of running a replica for mainnet as of 4/14/21 can be found below. This depends on running a local data transport layer that is syncing from L2. The addresses in the config can be found at the above link; note that the Proxy__* addresses are used instead of the underlying contracts themselves. The state dump files are becoming too large to keep in normal git; we need a better way of managing them. Note that both geth and the data transport layer must run together.

#!/bin/bash

USING_OVM=true ./build/bin/geth \
    --datadir $HOME/.ethereum \
    --eth1.syncservice \
    --eth1.l1crossdomainmessengeraddress 0xD1EC7d40CCd01EB7A305b94cBa8AB6D17f6a9eFE \
    --eth1.l1ethgatewayaddress 0xF20C38fCdDF0C790319Fd7431d17ea0c2bC9959c \
    --rollup.addressmanagerowneraddress 0x9BA6e03D8B90dE867373Db8cF1A58d2F7F006b3A \
    --rollup.statedumppath https://storage.googleapis.com/optimism/mainnet/3.json \
    --eth1.ctcdeploymentheight 12207792 \
    --rollup.clienthttp http://localhost:7878 \
    --rollup.verifier \
    --rpc \
    --dev \
    --chainid 10 \
    --rpcaddr 0.0.0.0 \
    --rpcport 8545 \
    --rpcvhosts '*' \
    --rpccorsdomain '*' \
    --ws \
    --wsaddr 0.0.0.0 \
    --wsport 8546 \
    --wsorigins '*' \
    --rpcapi 'eth,net,rollup,web3' \
    --gasprice 0 \
    --miner.gastarget 9000000 \
    --miner.gaslimit 9000000 \
    --nousb \
    --gcmode=archive \
    --ipcdisable \
    --verbosity=3

Config for the data transport layer

DATA_TRANSPORT_LAYER__DB_PATH=<my-db-path>
DATA_TRANSPORT_LAYER__SERVER_HOSTNAME=0.0.0.0
DATA_TRANSPORT_LAYER__SERVER_PORT=7878
DATA_TRANSPORT_LAYER__ADDRESS_MANAGER=0xd3EeD86464Ff13B4BFD81a3bB1e753b7ceBA3A39
DATA_TRANSPORT_LAYER__L1_START_HEIGHT=12207737
DATA_TRANSPORT_LAYER__CONFIRMATIONS=10
DATA_TRANSPORT_LAYER__POLLING_INTERVAL=5000
DATA_TRANSPORT_LAYER__LOGS_PER_POLLING_INTERVAL=2000

DATA_TRANSPORT_LAYER__SYNC_FROM_L1=false
DATA_TRANSPORT_LAYER__SYNC_FROM_L2=true
DATA_TRANSPORT_LAYER__L2_RPC_ENDPOINT=https://mainnet.optimism.io
DATA_TRANSPORT_LAYER__L2_CHAIN_ID=10

DATA_TRANSPORT_LAYER__TRANSACTIONS_PER_POLLING_INTERVAL=1000
DATA_TRANSPORT_LAYER__DANGEROUSLY_CATCH_ALL_ERRORS=true
DATA_TRANSPORT_LAYER__LEGACY_SEQUENCER_COMPATIBILITY=false
DATA_TRANSPORT_LAYER__STOP_L2_DEFAULT_BACKEND=l1

State / Tx batch Ordering Inconsistencies

Describe the bug
State root ordering within state commitment chain batches may differ from the transaction ordering with which the state roots are associated.

The root cause (no pun intended) appears to be that the SQL queries that assemble and order the two kinds of batches order by different fields — fields that should produce the same ordering, but may not.
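A toy illustration of the failure mode, with made-up data (the field names mirror the l2_tx_output table described below): Postgres id assignment order is not guaranteed to match block order under concurrent inserts, so two queries ordering by different fields can return rows in different orders.

```typescript
// Made-up rows where ids were assigned out of block order.
interface L2TxOutput {
  id: number
  blockNumber: number
}

const rows: L2TxOutput[] = [
  { id: 1, blockNumber: 100 },
  { id: 3, blockNumber: 101 }, // id assigned later...
  { id: 2, blockNumber: 102 }, // ...than a higher block's id
]

const byId = [...rows].sort((a, b) => a.id - b.id).map((r) => r.blockNumber)
const byBlock = [...rows]
  .sort((a, b) => a.blockNumber - b.blockNumber)
  .map((r) => r.blockNumber)

// byId yields [100, 102, 101] while byBlock yields [100, 101, 102]:
// batches built from the two orderings disagree. The fix is for both
// queries to order by the same canonical field (block_number).
```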

Steps to reproduce
Steps to reproduce the behavior:
This is hard to reproduce, as it appears to be non-deterministic and based upon how Postgres assigns the ID of its records, possibly across multiple atomic transactions.

The best way to reproduce is by:

  1. Submitting a few hundred transactions in a matter of seconds to the L2 Node without the transaction batch creator or state commitment batch creator running but with the L2 Chain Data Persister running
  2. Continue (1) until you run into a situation where there is at least one l2_tx_output record whose position differs when ordering the table's records by ID vs. by block_number
  3. Run the state commitment batch creator & transaction batch creator
  4. See that the transaction in question appears at a different cumulative index when batched

Expected behavior
State commitment batches have state roots which exactly correspond in number and order to the transactions within the transaction batch(es) they are committing to.

Screenshots
N/A


Distinguish development, test, and prod environment for services

Is your feature request related to a problem? Please describe.
Our services (at least batch-submitter and data-transport-layer) are currently environment agnostic. However, being environment-aware helps us conditionally launch monitoring apps (e.g. Sentry and prom-client) and send tags to them accordingly.

Describe the solution you'd like
Details need to be spiked out, but ideally during application start we can retrieve some variable indicating which environment the service runs in. We should also be able to tag logs with this environment.

Describe alternatives you've considered
If adding overall environment management is too much of an overhead, we could temporarily use env vars for these services, such as ENABLE_SENTRY=true.
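The env-var stopgap could look like the sketch below. The variable names (NODE_ENV, ENABLE_SENTRY) are conventional choices, not an agreed interface:

```typescript
// Hedged sketch: derive the environment from a well-known variable and
// gate monitoring on it, with an explicit override.
type Environment = 'development' | 'test' | 'production'

function currentEnvironment(env: Record<string, string | undefined>): Environment {
  const value = env.NODE_ENV
  if (value === 'production' || value === 'test') {
    return value
  }
  // Default to development so monitoring stays off unless configured.
  return 'development'
}

function sentryEnabled(env: Record<string, string | undefined>): boolean {
  // Explicit override, or always-on in production.
  return env.ENABLE_SENTRY === 'true' || currentEnvironment(env) === 'production'
}
```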

Additional context
Currently only related to TypeScript services, but could be extended to other code we run.

Create clear release README

Port the deployments folder into an easy to view README with all L1 contract addresses, L2 contract addresses, and RPC information for Kovan and Mainnet

Upgrade the Solidity compiler to simplify pre-deploy contract code

Our predeploys are currently written with an extremely error-prone pattern using the SafeExecutionManagerWrapper. An example can be found here:

Lib_SafeExecutionManagerWrapper.safeSSTORE(
    KEY_INITIALIZED,
    Lib_Bytes32Utils.fromBool(true)
);
Lib_SafeExecutionManagerWrapper.safeSSTORE(
    KEY_OWNER,
    Lib_Bytes32Utils.fromAddress(_owner)
);
Lib_SafeExecutionManagerWrapper.safeSSTORE(
    KEY_ALLOW_ARBITRARY_DEPLOYMENT,
    Lib_Bytes32Utils.fromBool(_allowArbitraryDeployment)
);

This (anti)pattern was particularly tricky when implementing RLP transactions, because we need to be able to compile REVERT, which proved difficult for the RLP contracts. This change is therefore blocking RLP.

Describe the solution you'd like
Implement kall, which allows contracts to call low-level execution manager functions when compiled with the normal OVM compiler.

TODO:

  • Update solc to include kall - ethereum-optimism/solidity#22
  • Update predeploys to use kall and test them in the integration-tests vs in the contracts package. - #475

SparseMerkleTree does not implement a sparse merkle tree

Describe the bug
This is not a sparse merkle tree.

Sparse merkle trees definitionally have 2**n nodes at each level of the tree (where n is the 0-indexed depth). If you intend to build a sparse merkle tree, you ought not pad.

If you intend to build a regular merkle that allows incremental storage of nodes, you ought not use the default hash mechanism (bytes32(height) would be sufficient padding).
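One common construction of the per-level defaults alluded to above is to hash up an empty subtree, so that an empty subtree of height h has a fixed default root rather than relying on padded leaves. This is a sketch of that definition, not the package's implementation:

```typescript
import { createHash } from 'crypto'

function sha256(data: Buffer): Buffer {
  return createHash('sha256').update(data).digest()
}

// defaults[h] is the root of an empty subtree of height h:
// defaults[0] hashes an empty (zero) leaf, and each higher level hashes
// two copies of the level below.
function defaultHashes(depth: number): Buffer[] {
  const defaults: Buffer[] = [sha256(Buffer.alloc(32))]
  for (let h = 1; h <= depth; h++) {
    const child = defaults[h - 1]
    defaults.push(sha256(Buffer.concat([child, child])))
  }
  return defaults
}
```

With these precomputed defaults, an incremental tree can store only non-empty nodes and fall back to defaults[h] for any missing subtree.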

Also, please consider tagging your interior nodes

Expected behavior
The name should reflect the common definition of sparse merkle trees.

Error using mnemonic for L2 wallet generation

Describe the bug
Rollup-fullnode fails to deploy the execution manager when using a mnemonic, with this error:

(node:34) UnhandledPromiseRejectionWarning: Error: missing provider
at Wallet.sendTransaction (/mnt/full-node/node_modules/ethers/wallet.js:107:19)
at ContractFactory.deploy (/mnt/full-node/node_modules/ethers/contract.js:683:28)
at Object.deployContract (/mnt/full-node/packages/rollup-full-node/src/app/util/utils.ts:120:34)
at deployExecutionManager (/mnt/full-node/packages/rollup-full-node/src/app/util/l2-node.ts:203:44)
at getExecutionManagerContract (/mnt/full-node/packages/rollup-full-node/src/app/util/l2-node.ts:183:30)
at processTicksAndRejections (internal/process/task_queues.js:86:5)

Steps to reproduce
Steps to reproduce the behavior:

  1. yarn build
  2. add L2_WALLET_MNEMONIC env variable to the rollup-full-node service in the docker-compose.yml
  3. docker-compose up

Expected behavior
Everything starts up as expected

Additional context
I've actually found the bug in the code:
packages/rollup-full-node/src/app/util/l2-node.ts

    ...
    else if (!!Environment.l2WalletMnemonic()) {
      wallet = Wallet.fromMnemonic(Environment.l2WalletMnemonic())
      wallet.connect(provider)
      log.info(`Initialized wallet from mnemonic. Address: ${wallet.address}`)
    }
    ...

At line 111, wallet.connect(provider) should be wallet = wallet.connect(provider), since connect returns a new connected Wallet instance rather than mutating the existing one.
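A minimal illustration of the pitfall, using a mock in place of ethers' Wallet (which likewise returns a new instance from connect rather than mutating the receiver):

```typescript
// MockWallet stands in for ethers.Wallet; the immutable connect() is the
// behavior that makes the dropped return value a bug.
class MockWallet {
  constructor(readonly provider: string | null = null) {}
  connect(provider: string): MockWallet {
    return new MockWallet(provider) // original instance is untouched
  }
}

let wallet = new MockWallet()

wallet.connect('http://localhost:8545') // bug: return value dropped
const stillMissing = wallet.provider === null // provider is still null

wallet = wallet.connect('http://localhost:8545') // fix: reassign
const nowConnected = wallet.provider !== null
```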

Reapply git history of packages within monorepo

To expedite the creation of this repository, we didn't port the git commit history of the prior repositories. This isn't an urgent issue, but it is something we should resolve soon to ensure we preserve that history.

ETH deposits all return status 0 (Revert)

Despite succeeding (ETH actually deposits and the correct events are emitted), ETH deposits show status: 0 when they are processed on L2.

Steps to reproduce the behavior:

  1. Run native eth integration tests
  2. Check getTransactionReceipt on the L2 tx hash of a completed ETH deposit.

Intermittent CI Timeout

Describe the bug
Looks like our CI sometimes times out -- I've noticed it twice. It might happen on different tests each time; the test that just timed out was:

Steps to reproduce
Steps to reproduce the behavior:

  1. Push changes to master
  2. Watch the CI fail

Expected behavior
Passing CI

Screenshots
Screen Shot 2020-10-01 at 1 58 50 PM

Question :: Howto enable debug traces

Hey There,

Very basic question: I tried to execute the test with the command "yarn run test ./test/contracts/ovm/execution-manager/ExecutionManager.executeCall.spec.ts", but I don't know how to activate the internal traces from both the *.sol and *.ts files.

Long console log, in stripped form:

vagrant@vagrant:/vagrant/ws-020-blocks/demo-002-optimism/optimism-monorepo/packages/contracts$ yarn run test ./test/contracts/ovm/execution-manager/ExecutionManager.executeCall.spec.ts 
yarn run v1.22.5
$ yarn run test:contracts ./test/contracts/ovm/execution-manager/ExecutionManager.executeCall.spec.ts
$ cross-env SOLPP_FLAGS="FLAG_IS_TEST,FLAG_IS_DEBUG" buidler test --show-stack-traces ./test/contracts/ovm/execution-manager/ExecutionManager.executeCall.spec.ts
Compiling...

Compiled 61 contracts successfully


  Execution Manager -- TX/Call Execution Functions
    executeNonEOACall
      ✓ fails if the provided timestamp is 0 (118ms)
      ✓ properly executes a raw call -- 0 param (151ms)
    executeEOACall
      ✓ properly executes a raw call -- 0 param (303ms)
      ✓ increments the senders nonce (189ms)
      ✓ properly executes a raw call -- 1 param (265ms)
      ✓ reverts when it makes a call that reverts
      ✓ reverts when it makes a call that fails a require (218ms)


  7 passing (13s)

Done in 37.86s.

Is there any configuration missing?

Thanks

Unified `Config` object

Is your feature request related to a problem? Please describe.
There are many environment variables managed across the codebases and there is a TypeScript Config class here: https://github.com/ethereum-optimism/optimistic-rollup-integration-tests/blob/master/packages/integration-tests/src/config.ts

Having something like this that is fully fleshed out and exported from one of the NPM packages would help to make configuration a lot easier and unified between all of the different repos/tests.

Describe the solution you'd like
A Config class that is aware of all of the environment variables used by the various repos and implements getters for them.

Additional context
There are a few of these Config classes around, each implemented only for the env vars it cares about.
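A minimal version of such a unified Config class might look like the sketch below. The env var names and defaults are examples, not the agreed interface:

```typescript
// Hedged sketch: one class that knows the env vars used across services
// and exposes typed getters with defaults or required-var errors.
class Config {
  constructor(
    private readonly env: Record<string, string | undefined> = process.env
  ) {}

  private require(name: string): string {
    const value = this.env[name]
    if (value === undefined) {
      throw new Error(`Missing required environment variable: ${name}`)
    }
    return value
  }

  // Optional with a default.
  l2NodeWeb3Url(): string {
    return this.env.L2_NODE_WEB3_URL ?? 'http://localhost:8545'
  }

  // Required: fail fast at startup if unset.
  l2WalletMnemonic(): string {
    return this.require('L2_WALLET_MNEMONIC')
  }

  // Parsed with a typed default.
  pollingIntervalMs(): number {
    return parseInt(this.env.POLLING_INTERVAL ?? '5000', 10)
  }
}
```

Exporting one such class from a shared NPM package would let every service and test suite read configuration the same way.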

Pull in example repositories

Currently, we have 3 kinds of examples:

These should all be pulled into this repo under an examples/ directory and tested in CI. IMO, showing the step-by-step evolution of the example repos is better done as content for a blog post or the community hub, not for the GitHub repo.

Nonce too low error

Describe the bug
The Optimism Geth node returns an error while trying to trace some transactions. The error is:

{"code":-32000,"message":"tracing failed: nonce too low"}}
To Reproduce
Run this query that is trying to get traces for tx 0x8ed7d752c57d29bfbae73f8c80efe197a904ef44834865594589c4e2cc35f5b7:

curl -X POST --data '{"jsonrpc": "2.0", "method": "debug_traceTransaction", "params": ["0x8ed7d752c57d29bfbae73f8c80efe197a904ef44834865594589c4e2cc35f5b7", {"tracer": "callTracer"}], "id": 1}' -H "Content-Type: application/json" <provider_uri>

The same behaviour can be observed with some other txs, such as 0x0b9b8aebd6dc5ac693ea3337feec05e2472e968614d425f789949ffbcb63ccf6, and also when using the debug_traceBlockByNumber method for blocks that contain such txs.

Expected behavior
The node returns traces for txs that are already included in the chain.

Cache CI Layers

Currently, the docker images are always re-built from scratch. Ideally, we should cache the layers after each CI build, so that running integration tests does not require a ~10 minute cold build.

Improve L1 Contract Upgradability

Is your feature request related to a problem? Please describe.
In #498 we implement the new deployment system (code name chugsplash) that automates / simplifies much of our upgrade process in L2. In order to avoid having two totally different deployment systems, one for L1 contracts & one for L2 contracts, we need to add an L1 backend for our new deployer.

Describe the solution you'd like
Add delegate proxies in front of all of our L1 contracts, and then use the exact same flow defined here: #498 but for L1 contract upgrades.

TODO:

  • Migrate all L1 contracts to use upgradable proxies.
    • These proxies must expose setCode and setStorage, as well as pause, which pauses the contracts during an upgrade
  • Modify the "chugsplash" tooling to use the same deployment API that we use on L2
    • Write an L1 UpgradeExecutor which calls setCode & setStorage on all of our L1 proxies
  • Ensure all of the tooling that we build for L2 works well during an L1 upgrade as well.
