
🚀🪑 evm-bench is a suite of Ethereum Virtual Machine stress tests and benchmarks.

License: GNU General Public License v3.0


evm-bench's Introduction

evm-bench


evm-bench is a suite of Ethereum Virtual Machine (EVM) stress tests and benchmarks.

evm-bench makes it easy to compare EVM performance in a scalable, standardized, and portable way.

|                         | evmone | revm   | pyrevm | geth   | py-evm.pypy | py-evm.cpython | ethereumjs |
|-------------------------|--------|--------|--------|--------|-------------|----------------|------------|
| sum                     | 66ms   | 84.8ms | 194ms  | 235ms  | 7.201s      | 19.0886s       | 146.3218s  |
| relative                | 1.000x | 1.285x | 2.939x | 3.561x | 109.106x    | 289.221x       | 2216.997x  |
| erc20.approval-transfer | 7ms    | 9.6ms  | 16.2ms | 17ms   | 425.2ms     | 1.13s          | 2.0006s    |
| erc20.mint              | 5ms    | 6.4ms  | 14.8ms | 17.2ms | 334ms       | 1.1554s        | 3.1352s    |
| erc20.transfer          | 8.6ms  | 11.6ms | 22.8ms | 24.6ms | 449.2ms     | 1.6172s        | 3.6564s    |
| snailtracer             | 43ms   | 53ms   | 128ms  | 163ms  | 5.664s      | 13.675s        | 135.059s   |
| ten-thousand-hashes     | 2.4ms  | 4.2ms  | 12.2ms | 13.2ms | 328.6ms     | 1.511s         | 2.4706s    |

To reproduce these results, check out usage with the evm-bench suite below.

Technical Overview

In evm-bench there are benchmarks and runners:

  • Benchmarks are expensive Solidity contracts paired with configuration.
  • Runners are consistent platforms for deploying and calling arbitrary smart contracts.

The evm-bench framework can run any benchmark on any runner. See the benchmarks and runners directories in the repository for details on how to build new ones.

Usage

With the evm-bench suite

Simply cloning this repository and running `RUST_LOG=info cargo run --release --` will do the trick. You may need to install some dependencies for the benchmark build process and the runner execution.

With another suite

evm-bench is meant to be used with the pre-developed suite of benchmarks and runners in this repository. However, it should work as an independent framework elsewhere.

See the CLI arguments for evm-bench to figure out how to set it up! Alternatively just reach out to me or post an issue.

Development

Do it. Reach out to me if you wanna lend a hand but don't know where to start!

evm-bench's People

Contributors

chfast, rakita, renovate[bot], valo, yilongli, ziyadedher

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar

evm-bench's Issues

Implement runners via Dockerfiles

Right now evm-bench just expects an executable that implements the runner interface, so I suppose that executable could itself call Docker. But runners typically just run natively.

I imagine an improvement would be to use mandatory Dockerfiles that can be built and run with an entry point that satisfies the runner interface. The metadata file for runners could point to that Dockerfile.

Some benefits:

  • More standardized environment. For example, it becomes a lot easier to make sure the same Python version is running benchmarks.
  • Less dependency management. No need to have poetry or go or whatever toolchain installed to build runners anymore!

Do we need to care about performance? I don't think this is that big of a deal since things are mostly meant to be taken relatively anyway. So as long as we're consistent, we're good.

Fail to run all benchmarks

I was wondering whether you could help address a build issue.

Environment: Ubuntu 20.04 x86_64; Rust: rustc 1.70.0 (x86_64-unknown-linux-gnu)
Here are my build commands:

git clone https://github.com/ziyadedher/evm-bench.git
cd evm-bench
git checkout 6d26d92  # the latest commit

curl -sSL https://install.python-poetry.org | python3 - && export PATH="$HOME/.local/bin:$PATH"  # install the dependency poetry
sudo apt install pypy3 -y  # install the dependency pypy3

RUST_LOG=info cargo run --release --  # run

Although all benchmarks are built successfully, almost every runner fails on every benchmark. Here is the output I obtained.

[2023-06-02T09:17:35Z INFO  evm_bench::metadata] found 5 benchmarks: erc20.mint, snailtracer, erc20.transfer, ten-thousand-hashes, erc20.approval-transfer
[2023-06-02T09:17:35Z INFO  evm_bench::metadata] found 8 runners: py-evm.pypy, ethereumjs, akula, evmone, geth, py-evm.cpython, pyrevm, revm
[2023-06-02T09:17:35Z INFO  evm_bench::build] building 5 benchmarks...
[2023-06-02T09:17:35Z INFO  evm_bench::build] building benchmark erc20.approval-transfer (ERC20ApprovalTransfer.sol w/ solc@stable)...
[2023-06-02T09:17:37Z INFO  evm_bench::build] building benchmark erc20.mint (ERC20Mint.sol w/ solc@stable)...
[2023-06-02T09:17:39Z INFO  evm_bench::build] building benchmark erc20.transfer (ERC20Transfer.sol w/ solc@stable)...
[2023-06-02T09:17:41Z INFO  evm_bench::build] building benchmark snailtracer (SnailTracer.sol w/ [email protected])...
[2023-06-02T09:17:44Z INFO  evm_bench::build] building benchmark ten-thousand-hashes (TenThousandHashes.sol w/ solc@stable)...
[2023-06-02T09:17:55Z INFO  evm_bench::run] running 5 benchmarks...
[2023-06-02T09:17:55Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on 8 runners...
[2023-06-02T09:17:55Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner akula...
[2023-06-02T09:18:17Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner akula: exit status: 101
[2023-06-02T09:18:17Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner ethereumjs...
[2023-06-02T09:18:18Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner ethereumjs: exit status: 254
[2023-06-02T09:18:18Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner evmone...
[2023-06-02T09:18:20Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner evmone: exit status: 1
[2023-06-02T09:18:20Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner geth...
[2023-06-02T09:18:20Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner geth: exit status: 1
[2023-06-02T09:18:20Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner py-evm.cpython...
[2023-06-02T09:18:20Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner py-evm.cpython: exit status: 1
[2023-06-02T09:18:20Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner py-evm.pypy...
[2023-06-02T09:18:20Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner py-evm.pypy: exit status: 1
[2023-06-02T09:18:20Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner pyrevm...
[2023-06-02T09:18:21Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner pyrevm: exit status: 1
[2023-06-02T09:18:21Z INFO  evm_bench::run] running benchmark erc20.approval-transfer on runner revm...
[2023-06-02T09:18:35Z WARN  evm_bench::run] could not run benchmark erc20.approval-transfer on runner revm: exit status: 101
[2023-06-02T09:18:35Z INFO  evm_bench::run] running benchmark erc20.mint on 8 runners...
[2023-06-02T09:18:35Z INFO  evm_bench::run] running benchmark erc20.mint on runner akula...
[2023-06-02T09:18:40Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner akula: exit status: 101
[2023-06-02T09:18:40Z INFO  evm_bench::run] running benchmark erc20.mint on runner ethereumjs...
[2023-06-02T09:18:42Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner ethereumjs: exit status: 254
[2023-06-02T09:18:42Z INFO  evm_bench::run] running benchmark erc20.mint on runner evmone...
[2023-06-02T09:18:43Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner evmone: exit status: 1
[2023-06-02T09:18:43Z INFO  evm_bench::run] running benchmark erc20.mint on runner geth...
[2023-06-02T09:18:44Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner geth: exit status: 1
[2023-06-02T09:18:44Z INFO  evm_bench::run] running benchmark erc20.mint on runner py-evm.cpython...
[2023-06-02T09:18:44Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner py-evm.cpython: exit status: 1
[2023-06-02T09:18:44Z INFO  evm_bench::run] running benchmark erc20.mint on runner py-evm.pypy...
[2023-06-02T09:18:44Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner py-evm.pypy: exit status: 1
[2023-06-02T09:18:44Z INFO  evm_bench::run] running benchmark erc20.mint on runner pyrevm...
[2023-06-02T09:18:45Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner pyrevm: exit status: 1
[2023-06-02T09:18:45Z INFO  evm_bench::run] running benchmark erc20.mint on runner revm...
[2023-06-02T09:18:45Z WARN  evm_bench::run] could not run benchmark erc20.mint on runner revm: exit status: 101
[2023-06-02T09:18:45Z INFO  evm_bench::run] running benchmark erc20.transfer on 8 runners...
[2023-06-02T09:18:45Z INFO  evm_bench::run] running benchmark erc20.transfer on runner akula...
[2023-06-02T09:18:55Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner akula: exit status: 101
[2023-06-02T09:18:55Z INFO  evm_bench::run] running benchmark erc20.transfer on runner ethereumjs...
[2023-06-02T09:18:57Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner ethereumjs: exit status: 254
[2023-06-02T09:18:57Z INFO  evm_bench::run] running benchmark erc20.transfer on runner evmone...
[2023-06-02T09:18:58Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner evmone: exit status: 1
[2023-06-02T09:18:58Z INFO  evm_bench::run] running benchmark erc20.transfer on runner geth...
[2023-06-02T09:18:58Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner geth: exit status: 1
[2023-06-02T09:18:58Z INFO  evm_bench::run] running benchmark erc20.transfer on runner py-evm.cpython...
[2023-06-02T09:18:59Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner py-evm.cpython: exit status: 1
[2023-06-02T09:18:59Z INFO  evm_bench::run] running benchmark erc20.transfer on runner py-evm.pypy...
[2023-06-02T09:18:59Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner py-evm.pypy: exit status: 1
[2023-06-02T09:18:59Z INFO  evm_bench::run] running benchmark erc20.transfer on runner pyrevm...
[2023-06-02T09:19:00Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner pyrevm: exit status: 1
[2023-06-02T09:19:00Z INFO  evm_bench::run] running benchmark erc20.transfer on runner revm...
[2023-06-02T09:19:00Z WARN  evm_bench::run] could not run benchmark erc20.transfer on runner revm: exit status: 101
[2023-06-02T09:19:00Z INFO  evm_bench::run] running benchmark snailtracer on 8 runners...
[2023-06-02T09:19:00Z INFO  evm_bench::run] running benchmark snailtracer on runner akula...
[2023-06-02T09:19:10Z WARN  evm_bench::run] could not run benchmark snailtracer on runner akula: exit status: 101
[2023-06-02T09:19:10Z INFO  evm_bench::run] running benchmark snailtracer on runner ethereumjs...
[2023-06-02T09:19:11Z WARN  evm_bench::run] could not run benchmark snailtracer on runner ethereumjs: exit status: 254
[2023-06-02T09:19:11Z INFO  evm_bench::run] running benchmark snailtracer on runner evmone...
[2023-06-02T09:19:13Z INFO  evm_bench::run] running benchmark snailtracer on runner geth...
[2023-06-02T09:19:13Z WARN  evm_bench::run] could not run benchmark snailtracer on runner geth: exit status: 1
[2023-06-02T09:19:13Z INFO  evm_bench::run] running benchmark snailtracer on runner py-evm.cpython...
[2023-06-02T09:19:13Z WARN  evm_bench::run] could not run benchmark snailtracer on runner py-evm.cpython: exit status: 1
[2023-06-02T09:19:13Z INFO  evm_bench::run] running benchmark snailtracer on runner py-evm.pypy...
[2023-06-02T09:19:14Z WARN  evm_bench::run] could not run benchmark snailtracer on runner py-evm.pypy: exit status: 1
[2023-06-02T09:19:14Z INFO  evm_bench::run] running benchmark snailtracer on runner pyrevm...
[2023-06-02T09:19:14Z WARN  evm_bench::run] could not run benchmark snailtracer on runner pyrevm: exit status: 1
[2023-06-02T09:19:14Z INFO  evm_bench::run] running benchmark snailtracer on runner revm...
[2023-06-02T09:19:14Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on 8 runners...
[2023-06-02T09:19:14Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner akula...
[2023-06-02T09:19:16Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner akula: exit status: 101
[2023-06-02T09:19:16Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner ethereumjs...
[2023-06-02T09:19:17Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner ethereumjs: exit status: 254
[2023-06-02T09:19:17Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner evmone...
[2023-06-02T09:19:19Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner evmone: exit status: 1
[2023-06-02T09:19:19Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner geth...
[2023-06-02T09:19:19Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner geth: exit status: 1
[2023-06-02T09:19:19Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner py-evm.cpython...
[2023-06-02T09:19:20Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner py-evm.cpython: exit status: 1
[2023-06-02T09:19:20Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner py-evm.pypy...
[2023-06-02T09:19:20Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner py-evm.pypy: exit status: 1
[2023-06-02T09:19:20Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner pyrevm...
[2023-06-02T09:19:21Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner pyrevm: exit status: 1
[2023-06-02T09:19:21Z INFO  evm_bench::run] running benchmark ten-thousand-hashes on runner revm...
[2023-06-02T09:19:21Z WARN  evm_bench::run] could not run benchmark ten-thousand-hashes on runner revm: exit status: 101
[2023-06-02T09:19:21Z INFO  evm_bench::results] wrote out results to /home/kenun/ethsema/Artifact/evm-bench/outputs/results/2023-06-02T09:19:21.157890950+00:00.evm-bench.results.json

Write some ERC721 benchmarks

Not sure how valuable this is since we already have ERC20 and EVM-wise they're pretty similar, but maybe worth it anyway?

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Rate-Limited

These updates are currently rate-limited.

  • Update Rust crate chrono to v0.4.38
  • Update Rust crate clap to v4.5.4
  • Update Rust crate serde to v1.0.201
  • Update Rust crate serde_json to v1.0.117
  • Update dependency @types/node to v18.19.33
  • Update dependency typescript to v5.4.5
  • Update Rust crate bytes to v1.6.0
  • Update Rust crate jsonschema to 0.18.0
  • Update dependency mypy to v1.10.0
  • Update dependency commander to v12
  • Lock file maintenance
  • ๐Ÿ” Create all rate-limited PRs at once ๐Ÿ”

Open

These updates have already been created.

Detected dependencies

cargo
Cargo.toml
  • bytes 1.5.0
  • chrono 0.4.35
  • clap 4.5.3
  • env_logger 0.11.3
  • glob 0.3.1
  • hex 0.4.3
  • jsonschema 0.17.1
  • log 0.4.21
  • serde 1.0.197
  • serde_json 1.0.114
  • tabled 0.14.0
  • users 0.11.0
runners/akula/Cargo.toml
  • bytes 1.5.0
  • clap 4.5.3
  • hex 0.4.3
runners/revm/Cargo.toml
  • bytes 1.5
  • clap 4.5.3
  • hex 0.4
  • primitive-types 0.11
github-actions
.github/workflows/rust.yml
  • actions/checkout v3
gomod
runners/geth/go.mod
  • go 1.19
npm
runners/ethereumjs/package.json
  • @ethereumjs/common 3.2.0
  • @ethereumjs/evm 1.4.0
  • @ethereumjs/util 8.1.0
  • @ethereumjs/vm 6.5.0
  • @types/node 18.19.26
  • commander 10.0.1
  • deno 0.1.1
  • ts-node 10.9.2
  • typescript 5.4.2
  • prettier 2.8.8
pep621
runners/py-evm/pyproject.toml
  • poetry-core >=1.0.0
runners/pyrevm/pyproject.toml
  • poetry-core >=1.0.0
poetry
runners/py-evm/pyproject.toml
  • python ^3.9
  • py-evm ^0.6.0-alpha.1
  • mypy ^1.0.0
  • black ^23.0.0
runners/pyrevm/pyproject.toml
  • python ^3.10
  • py-evm ^0.6.0-alpha.1
  • mypy ^1.0.0
  • black ^23.0.0


Evmone runner seems to be timing runs incorrectly when num-runs is bigger than 1

If you run snailtracer with num-runs greater than 1, the timing reported by evmone increases on each run:

      "evmone": {
        "run_times": [
          {
            "secs": 0,
            "nanos": 37000000
          },
          {
            "secs": 0,
            "nanos": 67000000
          },
          {
            "secs": 0,
            "nanos": 88000000
          },
          {
            "secs": 0,
            "nanos": 108000000
          },
          {
            "secs": 0,
            "nanos": 136000000
          },
          {
            "secs": 0,
            "nanos": 162000000
          },
          {
            "secs": 0,
            "nanos": 193000000
          },
          {
            "secs": 0,
            "nanos": 218000000
          },
          {
            "secs": 0,
            "nanos": 242000000
          },

I guess the runner keeps some kind of state between runs, which causes the VM to become slower on each run.

ERC20.transfer Benchmark runs 5000 Transactions 🤔

Hi, love this work it's been very insightful and saved me a lot of time!!

I'm just wondering: what is the motivation for running the ERC20.transfer benchmark with 5000 transactions? Does this mean that evmone, for example, should take 8.6ms / 5000 to process a single ERC20 transfer? I tried running this on my laptop, but I'm typically a stats/Python guy, so I'm not sure I ran it correctly; I got the same time whether I used ERC20Transfer.sol with the 5000-iteration for loop or without it, which I find very odd 🤔

Thanks in advance :)
