aurora-is-near / aurora-engine

⚙️ Aurora Engine implements an Ethereum Virtual Machine (EVM) on the NEAR Protocol.

Home Page: https://doc.aurora.dev/develop/compat/evm

Languages: Rust 98.25%, Solidity 1.48%, JavaScript 0.25%, Shell 0.03%

aurora-engine's Introduction

Aurora Engine


Aurora Engine implements an Ethereum Virtual Machine (EVM) on the NEAR Protocol. See doc.aurora.dev for additional documentation.

Deployments

Network   Contract ID        Chain ID
Mainnet   aurora             1313161554
Testnet   aurora             1313161555
Local     aurora.test.near   1313161556

Development

Prerequisites

  • Node.js (v14+)
  • cargo-make
cargo install --force cargo-make

Prerequisites for Development

  • Node.js (v14+)
  • Docker
  • cargo-make

Branches

  • master is the current stable branch. It must be ready, at all times, to be deployed on chain at a moment's notice.

  • develop is our bleeding-edge development branch. Please target all pull requests to this branch.

Building & Make Commands

Nearly every task run with cargo make requires a --profile argument; the exceptions are noted below.

The current available profiles are:

  • mainnet: suitable for mainnet.
  • mainnet-silo: silo contract suitable for mainnet.
  • testnet: suitable for testnet.
  • testnet-silo: silo contract suitable for testnet.
  • local: suitable for local development.
  • custom: suitable for custom environments, see note below.

In some circumstances, you may require a custom environment. This can be created in the .env folder as custom.env following the structure of the other .env files. See bin/local-custom.env for more details.

Every make invocation follows this pattern, though --profile is not required in all cases (such as cleanup):

cargo make [--profile <profile>] <task>

Building the engine and contracts

There are several commands that can be used to build the binaries. The currently supported parameters for the task field are listed below:

  • default: does not need to be specified, runs build. Requires a --profile argument.
  • build: builds the engine smart contract and produces aurora-<profile>.wasm in the bin folder. Depends on build-contracts. Requires a --profile argument.
  • build-test: builds the same artifacts with test features enabled, producing aurora-<profile>-test.wasm. Requires a --profile argument.
  • build-contracts: builds all the ETH contracts.
  • build-docker: builds aurora-<profile>.wasm in the bin folder using a Docker build environment. The purpose of this task is to produce reproducible binaries.

For example, the following will build the mainnet release binary:

cargo make --profile mainnet build

Verifying binary hash

To verify that a deployed binary matches the source code, you may want to build it reproducibly and then check that its SHA-256 hash matches that of the deployed binary. The motivation behind this is to prevent malicious code from being deployed.

Run these commands to produce the binary hash:

cargo make --profile <profile> build-docker
shasum -a 256 bin/aurora-<profile>.wasm

Running unit & integration tests

To run tests, there are a few cargo make tasks we can run:

  • test: tests the whole cargo workspace and ETH contracts, and runs the modexp benchmarks. Requires a --profile argument.
  • test-workspace: tests only the cargo workspace.
  • test-contracts: tests only the contracts.
  • test-flow: tests the whole cargo workspace and ETH contracts. Requires a --profile argument.
  • bench-modexp: runs the modexp benchmarks. Requires a --profile argument.

For example, the following will test the whole workspace and ETH contracts:

cargo make --profile mainnet test 

Running checks & lints

The following tasks are available to run lints and checks:

  • check: checks the format, runs clippy, and lints the ETH contracts.
  • check-contracts: runs yarn lints on the ETH contracts.
  • check-fmt: checks the workspace Rust format only.
  • clippy: checks the Rust workspace with clippy only.

For example, the following command will run the checks. --profile is not required here:

cargo make check

Cleanup

The following tasks are available to clean up the workspace:

  • clean: cleans all built binaries and ETH contracts.
  • clean-cargo: cleans with cargo.
  • clean-contracts: cleans the ETH contracts.
  • clean-bin: cleans the binaries.

Additionally, there is one task that is not included in clean:

  • sweep: cleans cargo build artifacts older than the number of days set in the environment (30 days by default).

For example, the following command will clean everything. --profile is not required:

cargo make clean

License

aurora-engine has multiple licenses:

  • All crates except engine-test are licensed under CC0-1.0.
  • engine-test is licensed under GPL-3.0.

aurora-engine's People

Contributors

0x3bfc, aleksuss, andrcmdr, artob, birchmd, casuso, dependabot[bot], evgenykuzyakov, forwardslashback, frankbraun, guidovranken, hskang9, i-fix-typos, ilblackdragon, joshuajbouw, karim-en, lempire123, matklad, mfornet, mrlsd, raventid, romanhodulak, sept-en, strokovok, vimpunk, vzctl


aurora-engine's Issues

Profile and optimize generational storage

Per the discussion on PR #87, we would like to avoid calling get_generation for each storage access in the EVM backend.

This issue is for the following:

  • Profile the gas usage of storage calls
  • Improve the performance (either with a caching mechanism or by having a generation variable scoped in SputnikVM calls)
  • Use aforementioned profiling to show the amount of improvement

In improving the performance, it might also be worthwhile to try to remove the generation argument from get_storage and set_storage, since the invariant generation == get_generation(address) must hold (where address is also an argument to get/set_storage), yet the current type signatures do not enforce it.
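
A rough sketch of that refactor (names and signatures here are hypothetical, not the engine's actual API) would derive the generation inside the accessor, so the invariant holds by construction:

type Address = [u8; 20];
type RawU256 = [u8; 32];

fn get_generation(_address: &Address) -> u32 {
    unimplemented!("look up the current generation of this address")
}

fn read_storage_at(_address: &Address, _generation: u32, _key: &RawU256) -> RawU256 {
    unimplemented!("read the value stored under (address, generation, key)")
}

// The caller can no longer pass a mismatched generation: it is always looked
// up (or served from a cache) internally.
fn get_storage(address: &Address, key: &RawU256) -> RawU256 {
    let generation = get_generation(address);
    read_storage_at(address, generation, key)
}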

Engine initialization requires two calls

During its development, the eth-connector was essentially treated as a separate contract that happens to be contained inside the main engine. A side effect of this is that it has its own initialization function, which is separate from the one for the engine itself.

However, the connector is not a distinct contract; it is just a part of the engine, and having to make two calls to fully initialize the engine is sub-optimal. For example, this recently caused an issue with the bully.

The purpose of this issue is to make the call to new alone enough to initialize the contract.

Additionally, there are two prover-related fields: one in the connector and one in the engine state. The latter appears to never be used so I suspect that it could be removed as it is likely a duplicate of the former. This is relevant to this issue because it means the initialization still only needs to take one prover argument.

Note: any change to the contract initialization also needs to be reflected in the aurora.js package. For example, see aurora-is-near/aurora.js#3

Consider building solidity contracts automagically during tests

Today, I was helping joshuajbouw to debug an issue, and I spent some extra time suffering from the "you need make release before cargo test" papercut. I spent some time initially setting things up, and then debugging the fact that my changes to the contract were not picked up by the tests (as I was running cargo test and not make release && cargo test).

This could be avoided if the tests just built the contract themselves, à la:

use std::path::PathBuf;
use std::process::Command;
use std::sync::Once;

fn get_contract() -> PathBuf {
    // Build the contract exactly once per test binary invocation.
    static ONCE: Once = Once::new();
    ONCE.call_once(|| {
        let status = Command::new("cargo")
            .env("RUSTFLAGS", "-C link-arg=-s")
            .args(["build", "--target", "wasm32-unknown-unknown", "--release"])
            .status()
            .unwrap();
        assert!(status.success(), "contract build failed");
    });
    PathBuf::from("./target/release.wasm")
}

Integration test for upgrading from `master` to `develop`

It would be good to have an integration test that proves we can safely upgrade from the current master to develop, keeping any existing state intact. If this test fails, then we know we need to write a state migration in develop. The flow for this test would be something along these lines:

  1. Checkout and build master
  2. Deploy the contract in near-sdk-sim and do some transactions (make some accounts, transfer some eth, etc)
  3. Checkout and build develop
  4. Go through the upgrade process (stage_upgrade then deploy_upgrade)
  5. Ensure state is left intact and do some more transactions to see that it works

Precompile gas tests

Simply put: now that used gas is correctly returned by the precompiles, there should be tests ensuring the reported gas remains correct.

Of course, a standard solution would be to simply assert on that single output. The cost is static for some precompiles, but input-dependent for others.

Unit testing engine/connector contract code

Presently we can only test contract code via integration tests. Integration tests are obviously important, but I do not think they are a replacement for unit tests. Unit testing encourages good isolation of logic and components so that they can be tested independently. Unit tests are also generally simple to understand, and so can function as a form of documentation for on-boarding new developers to the project (they show how each piece is supposed to work). They also make future modifications to code easier to make because breaking a specific unit test will more clearly show what is broken, as opposed to an integration test which might be more opaque.

The reason unit tests are not possible currently is twofold:

  1. Compilation. Some existing tests depend on std, while the contract code is all no-std. This needs to be reconciled.
  2. Code organization. Calls to sdk functions generally rely on externals which in production are provided to the wasm environment by the Near runtime. In a unit testing environment we should avoid such calls and instead focus on testing pure functions which do not need the context (storage, block height, timestamps, etc) sdk calls provide. This may require refactoring some existing code to isolate the pure logic from the sdk side-effects.

The purpose of this issue is to resolve the two above items. It would also be nice to write a couple new unit tests as well, but complete unit test coverage should probably be a separate issue, lest the work for this one become too large.

`make check` failing due to wasmer error

Getting the following error when trying to run make check on a MacBook Air (M1, 2020)

error[E0425]: cannot find function `get_fault_info` in this scope
   --> /Users/rootling/.cargo/registry/src/github.com-1ecc6299db9ec823/wasmer-runtime-core-near-0.17.1/src/fault.rs:289:21
    |
289 |         let fault = get_fault_info(siginfo as _, ucontext);
    |                     ^^^^^^^^^^^^^^ not found in this scope

   Compiling wasmer-compiler v1.0.2
   Compiling near-vm-errors v3.0.0
   Compiling cranelift-frontend v0.67.0
error: aborting due to previous error

For more information about this error, try `rustc --explain E0425`.
error: could not compile `wasmer-runtime-core-near`

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed

Use custom ERC20 contract in benchmarks

#53 introduces a particular ERC20 that will be used with bridged tokens. We are already measuring the gas usage of ERC20 tokens in the benchmarks, however they currently use a generic ERC20. To get more precise gas measurements for ERC20 transfers the benchmark should be updated to use the actual ERC20 we are using in the EVM.

Add `evm-bully` tests to CI

To prevent regressions and ensure performance. These tests are long-running and need large test data sets, so we will probably need something other than GitHub workflows for them. Details TBD.

Implement ETH tests to ensure a fully compatible implementation

As with upstream SputnikVM, we should also implement the whole suite of Ethereum tests.

Upstream only tests general state changes. We would need to test more than that, as we have a more complete implementation with precompiles and more. While we feel confident about our implementation, and upstream does take care of the bulk of the main tests, it would still be great to ensure full compatibility as the EVM evolves.

This requires a design proposal, as this will be a new library; the proposal should be made in the discussions.

Optimize gas usage in ecpair precompile

Currently the ecpair precompile cannot be executed because it uses up more than 200 Tgas by itself. The purpose of this issue is to get this number down so that it becomes usable.

See #41 for information about measuring gas usage in pre-compiles.

Engine sub-crates

Idea

To keep the library compartmentalised we should aim to start making some sub-crates. This is quite standard practice in Rust. Right now, we just have a gigantic library with everything in it. Not absolutely required right now, but will be good to tackle it when possible. The longer this task gets pushed to the side, the longer it'll take.

Solution

A typical solution would be, for example, to move all the precompiles into their own crate.

Notes

  • Related to #56

Crate-wide documentation

This is an issue meant to keep track of the documentation of the whole library until the first release on the main network.

Basic checklist:

  • Crate level docs are thorough and include examples (C-CRATE-DOC)
  • All items have a rustdoc example (C-EXAMPLE)
  • Examples use ?, not try!, not unwrap (C-QUESTION-MARK)
  • Function docs include error, panic, and safety considerations (C-FAILURE)
  • Prose contains hyperlinks to relevant things (C-LINK)
  • Cargo.toml includes all common metadata (C-METADATA)
    • authors, description, license, homepage, documentation, repository, readme, keywords, categories
  • Crate sets html_root_url attribute "https://docs.rs/CRATE/X.Y.Z" (C-HTML-ROOT)
  • Release notes document all significant changes (C-RELNOTES)
  • Rustdoc does not show unhelpful implementation details (C-HIDDEN)

DO NOTE that some items in the checklist above may not be applicable, as this project is meant to be a WASM binary rather than a typical library, though it could certainly be published as one in the future.

Strict CI for merging to develop branch

We want to automate deployment to testnet, however we also do not want to accidentally break anything. Therefore, we need strict CI tests which will ensure merging the new code will not break existing features.

Suggested workflow:

  1. Create a staging.aurora account on Near
  2. Deploy the contract to staging.aurora (and initialize the contract)
  3. Run through some transactions:
    a. Mint ERC-20 + Transfer ERC-20
    b. Access eth faucet? (TODO: not sure how this will work with bridged eth yet)
    c. Interact via relayer? (TODO: this would require setting up a new relayer for staging, is this worth it?)

This CI suite will likely take a long time to run, so I don't think it should trigger on every push. It also has a stateful impact on testnet, so multiple of these runs cannot be done in parallel. Maybe we should forbid PRs directly targeting develop and instead let people merge into another branch (e.g. staging) which automatically tries to deploy to testnet once per day, and if it is successful then merges to develop?

Suggestions for the transactions performed, and how to setup the triggering of these tests are welcome.

Refactoring: unwrap- and expect- related traits

Currently, we have ExpectUtf8, SdkUnwrap, and SdkExpect defined in the src/lib.rs and src/types.rs files.
It makes sense to move these to some dedicated module, or even to src/prelude.rs (I'm not sure prelude is a good place for them though, as this would require importing the sdk module).
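
For illustration, a minimal sketch of what one such trait looks like once moved to a dedicated module; the sdk::panic_utf8 stub below is a hypothetical stand-in for the real SDK panic machinery:

mod sdk {
    // Hypothetical stand-in: the real engine reports the message through a
    // NEAR host function rather than a plain panic.
    pub fn panic_utf8(bytes: &[u8]) -> ! {
        panic!("{}", String::from_utf8_lossy(bytes))
    }
}

pub trait SdkUnwrap<T> {
    fn sdk_unwrap(self) -> T;
}

impl<T, E: AsRef<[u8]>> SdkUnwrap<T> for Result<T, E> {
    fn sdk_unwrap(self) -> T {
        self.unwrap_or_else(|e| sdk::panic_utf8(e.as_ref()))
    }
}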

Implement precompiles

Precompiles should be protected by feature flags:

  • The current default option should be to include the full implementation in the contract right now.
  • An alternative implementation is to call the SDK for these functions (e.g. when they become available).

This will require many dependencies, so we should check whether they support no_std. If they do not, we will need to stomach std for now.

Better mechanism for capturing gas usage in benchmarks

In #41 gas usage of the benchmarks is simply reported to the terminal. This makes capturing the data and analyzing it a manual process, which is sub-optimal.

The goal of this issue is to find a better way to capture the gas usage data, and to combine it with the timing data automatically in order to draw conclusions.

One possibility might be to use a custom measurement in criterion itself.

Improve CI performance via caching

CI takes a long time to run presently. The majority of this time is spent downloading and compiling rust crates. If we cache cargo artifacts then we should be able to greatly improve the CI performance. The purpose of this issue is to investigate using caching in GitHub actions.

Reduce contract size

In the commit 599156f, the release.wasm contract size was 315,221 bytes. The subsequent commit b3be59b bloated that to 1,434,361 bytes. We'll need to figure out where the bloat comes from and cut it back down to size.

Upgrade logos dependency in lunarity

Our project has a dependency that does not compile on the latest nightly:

$ cargo +nightly check
   Compiling logos-derive v0.7.7
error[E0061]: this function takes 1 argument but 2 arguments were supplied
   --> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:55:20
    |
55  |             extras.insert(util::ident(&ext), |_| panic!("Only one #[extras] attribute can be declared."));
    |                    ^^^^^^ -----------------  ----------------------------------------------------------- supplied 2 arguments
    |                    |
    |                    expected 1 argument
    |
note: associated function defined here

error[E0061]: this function takes 1 argument but 2 arguments were supplied
   --> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:89:23
    |
89  |                 error.insert(variant, |_| panic!("Only one #[error] variant can be declared."));
    |                       ^^^^^^ -------  -------------------------------------------------------- supplied 2 arguments
    |                       |
    |                       expected 1 argument
    |
note: associated function defined here

error[E0061]: this function takes 1 argument but 2 arguments were supplied
   --> /home/birchmd/.cargo/registry/src/github.com-1ecc6299db9ec823/logos-derive-0.7.7/src/lib.rs:93:21
    |
93  |                 end.insert(variant, |_| panic!("Only one #[end] variant can be declared."));
    |                     ^^^^^^ -------  ------------------------------------------------------ supplied 2 arguments
    |                     |
    |                     expected 1 argument
    |
note: associated function defined here

error: aborting due to 3 previous errors

For more information about this error, try `rustc --explain E0061`.
error: could not compile `logos-derive`

To learn more, run the command again with --verbose.

This logos-derive dependency comes from our dependency on lunarity. Fortunately, we are already working off a fork which we control (it's on Illia's GitHub), and the latest version of logos (v0.12) does compile with latest nightly. So we can resolve this issue by upgrading logos to the latest version in our fork of the lunarity project.

Unfortunately, this upgrade is non-trivial because there were several breaking API changes in logos, so some code in lunarity will need to be refactored to work with the latest version. This issue represents the work to do that refactoring and thus have a version of lunarity that compiles in latest nightly for our project to depend on.

This is probably a low priority item since it will likely not be blocking us until we decide to upgrade our rust-toolchain file to a later version of rust.

Automated deployment to TestNet

We want a GitHub action which triggers on merge to develop which will deploy the new version of the contract to @develop.aurora on TestNet.

  • Need to figure out the security of this (i.e. we do not want it to be possible for anyone to deploy to TestNet, only the GitHub action)
  • We will need more strict CI to ensure we don't accidentally break the testnet deployment. (See #89)

Anything else I've missed? Comments welcome.

SDK separation from functional logic

Currently, a bunch of our code is quite difficult (though obviously not impossible) to test because many of the functions require the SDK. After going through much of the code, I have determined that the SDK logic should, as much as possible, be separated from the Engine.

A possible solution would be to have an SDK layer wrap the functional code. If it is not possible for some of the methods, then keep them in the SDK layer.

Since this would be a rather large refactor, it would be wise to do this on a separate branch, cherry-picking and adapting new commits as they come in until it is caught up. It should then be reviewed by all and merged in.
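
A minimal sketch of the layering idea, with hypothetical names (the engine's real types and host functions differ):

// Abstraction over the host context the engine needs; backed by the NEAR
// runtime in production and by a mock in unit tests.
trait Env {
    fn read_storage(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn block_height(&self) -> u64;
}

// Pure logic takes the environment as an explicit parameter instead of
// calling sdk functions directly, so it can be unit tested in isolation.
fn is_stale(env: &dyn Env, recorded_height: u64) -> bool {
    env.block_height() > recorded_height
}

struct MockEnv {
    height: u64,
}

impl Env for MockEnv {
    fn read_storage(&self, _key: &[u8]) -> Option<Vec<u8>> {
        None
    }

    fn block_height(&self) -> u64 {
        self.height
    }
}

// In a unit test: assert!(is_stale(&MockEnv { height: 100 }, 42));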

Library-wide hard fork configuration

Idea

As we start getting deeply into adding hard fork support that is beyond precompiles, we need to have library-wide support for hard forks.

Solution

A possible solution would be to use either marker types or an enum. An enum uses a minimum of 4 bytes of data, but a marker type, as I understand it, uses no extra data at all. There should also possibly be different deployment options for deploying different hard forks. This would make it easier to redeploy on other chains if we ever want to expand beyond ETH.
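
A sketch of the marker approach, mirroring the marker types the precompiles already use (e.g. ModExp::<Byzantium>); everything here is illustrative:

// Zero-sized marker types: no runtime data, the hard fork is fixed at compile time.
pub struct Istanbul;
pub struct Berlin;

pub trait HardFork {}
impl HardFork for Istanbul {}
impl HardFork for Berlin {}

// Components become generic over the fork, so each deployment pins its fork
// in one place and redeploying for another chain is a type-parameter change.
pub struct Engine<HF: HardFork> {
    _fork: core::marker::PhantomData<HF>,
}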

Notes

  • Required for changes such as #165

Understanding precompiles return value

It is hard to tell from this tuple: type PrecompileResult = Result<(ExitSucceed, Vec<u8>, u64), ExitError>; what the last value (u64) means. I'm assuming that the second value, Vec<u8>, is the output.

Could you clarify what the last value means?

Also, consider changing this tuple to a named struct and/or adding some comments documenting it.
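
For what it's worth, the last value is the gas cost charged by the precompile. A named struct along these lines (field names hypothetical) would make that self-documenting:

use evm::{ExitError, ExitSucceed};

// Hypothetical named struct replacing the tuple.
pub struct PrecompileOutput {
    pub exit_status: ExitSucceed,
    /// The bytes returned by the precompile.
    pub output: Vec<u8>,
    /// The gas consumed by the precompile (the previously unclear u64).
    pub cost: u64,
}

pub type PrecompileResult = Result<PrecompileOutput, ExitError>;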

Migrate to resolver=2 from `-Zavoid-dev-deps`

Cargo recently stabilized a new, opt-in feature resolution algorithm:

https://doc.rust-lang.org/cargo/reference/features.html#feature-resolver-version-2

It subsumes -Z avoid-dev-deps:

diff --git a/Cargo.toml b/Cargo.toml
index a84a9a2..55db231 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -11,6 +11,7 @@ repository = "https://github.com/aurora-is-near/aurora-engine"
 license = "CC0-1.0"
 publish = false
 autobenches = false
+resolver = "2"
 
 [lib]
 crate-type = ["cdylib", "rlib"]
diff --git a/Makefile b/Makefile
index 6f02e46..80b0062 100644
--- a/Makefile
+++ b/Makefile
@@ -14,7 +14,7 @@ release.wasm: target/wasm32-unknown-unknown/release/aurora_engine.wasm
 	ln -sf $< $@
 
 target/wasm32-unknown-unknown/release/aurora_engine.wasm: Cargo.toml Cargo.lock $(wildcard src/*.rs)
-	RUSTFLAGS='-C link-arg=-s' $(CARGO) build --target wasm32-unknown-unknown --release --no-default-features --features=$(FEATURES) -Z avoid-dev-deps
+	RUSTFLAGS='-C link-arg=-s' $(CARGO) build --target wasm32-unknown-unknown --release --no-default-features --features=$(FEATURES)
 
 debug: debug.wasm
 
@@ -22,7 +22,7 @@ debug.wasm: target/wasm32-unknown-unknown/debug/aurora_engine.wasm
 	ln -sf $< $@
 
 target/wasm32-unknown-unknown/debug/aurora_engine.wasm: Cargo.toml Cargo.lock $(wildcard src/*.rs)
-	$(CARGO) build --target wasm32-unknown-unknown --no-default-features --features=$(FEATURES) -Z avoid-dev-deps
+	$(CARGO) build --target wasm32-unknown-unknown --no-default-features --features=$(FEATURES)
 
 .PHONY: all release debug
 

Fill in the backend engine implementation.

The engine implementation is sparse and filled with quite a bit of dummy data. Filling it in may take a bit of research to find the correct or similar methods on NEAR.

See `impl evm::backend::Backend for Engine {` in engine.rs.

Implement base token using FT standard

near/near-evm#64 (comment)

This modifies functions like deposit(receiver, amount) and withdraw(receiver, amount) so that they can only be called from a specific contract. This requires a bit more specification though.

Otherwise, make sure the interface is compliant with the NEP-141 standard.
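
For reference, the NEP-141 core methods that the interface must expose are sketched below; the method names are fixed by the standard, while the Rust types shown are simplified stand-ins for the near-sdk types a real implementation would use:

// `String`/`u128` stand in for near-sdk's `AccountId`/`U128` JSON types.
trait FungibleTokenCore {
    // Requires a 1 yoctoNEAR deposit in the standard (omitted here).
    fn ft_transfer(&mut self, receiver_id: String, amount: u128, memo: Option<String>);
    // Transfers and then calls `ft_on_transfer` on the receiver; real
    // implementations return a promise for the possibly-refunded amount.
    fn ft_transfer_call(&mut self, receiver_id: String, amount: u128, memo: Option<String>, msg: String);
    fn ft_total_supply(&self) -> u128;
    fn ft_balance_of(&self, account_id: String) -> u128;
}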

EthConnector: remove deprecated EIP-712 implementation

Currently, the prover.rs file contains an EIP-712 implementation that was previously used as a design approach for exits from Aurora. It's not used anywhere anymore, so it makes sense to remove it.

Or do we need it for any future implementations?

Newtype for balances in EVM

It's easy to accidentally mess up amounts in the EVM, even for experts like ourselves (e.g. #57 (comment)). Also, U256 represents lots of different (non-interchangeable) quantities: account nonces, account balances (in Wei), ERC20 token balances (of various sorts!), etc.

To make sure we get account balances right we should have a newtype to represent amounts in Wei, and ensure all function arguments that represent balances use this type. It should also include convenience functions for unit conversion.

E.g.

use primitive_types::U256;

pub struct Wei(U256);

impl Wei {
    // `U256::from` is not a `const fn`, so the constant is built directly
    // from its 64-bit limbs: 10^18 fits in the lowest limb.
    const ETH_TO_WEI: U256 = U256([1_000_000_000_000_000_000, 0, 0, 0]);

    pub fn new(amount: U256) -> Self {
        Self(amount)
    }

    /// Returns `None` if the multiplication overflows.
    pub fn from_eth(amount: U256) -> Option<Self> {
        amount.checked_mul(Self::ETH_TO_WEI).map(Self)
    }
}
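
A hypothetical call site then makes the unit explicit:

let deposit = Wei::from_eth(U256::from(10)).expect("small amounts cannot overflow");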

Correctly handle gas limits on transactions

Presently we allow any gas limit to be specified on a transaction passed to submit. This is incorrect behaviour relative to Ethereum where there is a notion of "intrinsic gas".

The current behaviour is causing a problem because (a) it is an incompatibility with Ethereum, and we are always pushing for maximum compatibility; and (b) the relayer has trouble handling transactions with 0 gas limit (and these would be invalid transactions if we had the correct logic in the Engine).

This issue is completed when the following is done:

  • Have a function which computes "intrinsic gas" for transactions
  • Verify all transactions passed via submit have a gas limit greater than or equal to the intrinsic gas
  • Respect the gas limit when processing transactions passed to submit (i.e. fail with OOG when computation exceeds limit)
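
As a reference point, here is a minimal sketch of the intrinsic gas calculation, using the standard Ethereum costs (21,000 base, 32,000 extra for contract creation, 4 per zero byte and 16 per non-zero byte of calldata post-Istanbul); this is illustrative, not the engine's implementation:

fn intrinsic_gas(data: &[u8], is_contract_creation: bool) -> u64 {
    let zero_bytes = data.iter().filter(|b| **b == 0).count() as u64;
    let nonzero_bytes = data.len() as u64 - zero_bytes;
    let creation_cost = if is_contract_creation { 32_000 } else { 0 };
    21_000 + creation_cost + 4 * zero_bytes + 16 * nonzero_bytes
}

// A transaction passed to submit would then be rejected unless
// tx.gas_limit >= intrinsic_gas(&tx.data, tx.to.is_none()).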


Constant errors

Strings can easily hog up a lot of the bytecode that is produced in the WASM file. There are a lot of errors that could use the same error kinds.

A probable solution would be to define library-wide errors in an error.rs file. Errors that relate directly to other parts of the code should live alongside that code and be re-exported from error.rs. This also has the additional benefit of providing a single reference module where all of the library's errors can be looked up.

This is related in part to #7, as this will help reduce contract code.
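
A hypothetical example of the pattern (the constants shown are illustrative, not the engine's actual error set):

// error.rs: short, shared byte-string errors, deduplicated across the crate.
pub const ERR_INVALID_INPUT: &[u8] = b"ERR_INVALID_INPUT";
pub const ERR_OUT_OF_GAS: &[u8] = b"ERR_OUT_OF_GAS";

// Module-specific errors stay near their own code and are re-exported here:
// pub use crate::connector::error::*;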

Potential collision with precompiles in address space

To detect if a call was directed to a precompile we are using the following code:

https://github.com/aurora-is-near/aurora-engine/blob/master/src/precompiles/mod.rs#L125

    match address.to_low_u64_be() {
        1 => Some(ECRecover::run(input, target_gas, context)),
        2 => Some(SHA256::run(input, target_gas, context)),
        3 => Some(RIPEMD160::run(input, target_gas, context)),
        4 => Some(Identity::run(input, target_gas, context)),
        5 => Some(ModExp::<Byzantium>::run(input, target_gas, context)),
        6 => Some(BN128Add::<Istanbul>::run(input, target_gas, context)),
        7 => Some(BN128Mul::<Istanbul>::run(input, target_gas, context)),
        8 => Some(BN128Pair::<Istanbul>::run(input, target_gas, context)),
        9 => Some(Blake2F::run(input, target_gas, context)),
        // Not supported.
        _ => None,
    }

Notice that we are truncating the address (20 bytes) to the last 8 bytes and checking whether it matches any of the precompiles. I think we should not truncate the address and should compare the full 20 bytes directly; otherwise we expose ourselves to collisions between real deployed addresses and precompiles. 64 bits still has some room, so collisions should not happen naturally, but I think it is not unreasonable to believe that it is easy to craft an address whose last 8 bytes match one of our precompiles.

I haven't reasoned about the security implications of this, but let's avoid hitting this issue anyway.
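
A sketch of the proposed fix, assuming the H160 type from the primitive-types crate (helper names hypothetical):

use primitive_types::H160;

// The full 20-byte address of a precompile: 19 zero bytes plus the id.
fn precompile_address(id: u8) -> H160 {
    H160::from_low_u64_be(id as u64)
}

// Dispatch then compares complete addresses instead of truncating:
fn is_ecrecover(address: &H160) -> bool {
    *address == precompile_address(1)
}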
