
antelopeio / leap

C++ implementation of the Antelope protocol

License: Other

CMake 0.90% Shell 0.16% C++ 56.95% C 0.27% Assembly 0.01% WebAssembly 35.32% Objective-C 0.01% Python 5.91% Dockerfile 0.07% HTML 0.41%

leap's Introduction

Leap

  1. Branches
  2. Supported Operating Systems
  3. Binary Installation
  4. Build and Install from Source
  5. Bash Autocomplete

Leap is a C++ implementation of the Antelope protocol. It contains blockchain node software and supporting tools for developers and node operators.

Branches

The main branch is the development branch; do not use it for production. Refer to the release page for current information on releases, pre-releases, and obsolete releases, as well as the corresponding tags for those releases.

Supported Operating Systems

We currently support the following operating systems.

  • Ubuntu 22.04 Jammy
  • Ubuntu 20.04 Focal

Other Unix derivatives, such as macOS, are tended to on a best-effort basis and may not be fully featured. If you aren't using Ubuntu, please visit the "Build Unsupported OS" page to explore your options.

If you are running an unsupported Ubuntu derivative, such as Linux Mint, you can find the version of Ubuntu your distribution was based on by using this command:

cat /etc/upstream-release/lsb-release

Your best bet is to follow the instructions for your Ubuntu base, but we make no guarantees.
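The check above can be wrapped in a small guard. This is a sketch, assuming an Ubuntu derivative that ships the standard /etc/upstream-release/lsb-release file (as Linux Mint does); on other systems it falls back to a message:

```shell
# Report the Ubuntu base release of an Ubuntu derivative, if recorded.
# Assumes the lsb-release file format used by derivatives such as Linux Mint.
ubuntu_base() {
    if [ -f /etc/upstream-release/lsb-release ]; then
        # The file defines DISTRIB_ID, DISTRIB_RELEASE, etc.
        . /etc/upstream-release/lsb-release
        echo "Ubuntu base: ${DISTRIB_RELEASE}"
    else
        echo "no upstream-release info found"
    fi
}

ubuntu_base
```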

Binary Installation

This is the fastest way to get started. From the latest release page, download a binary for one of our supported operating systems, or visit the release tags page to download a binary for a specific version of Leap.

Once you have a *.deb file downloaded for your version of Ubuntu, you can install it as follows:

sudo apt-get update
sudo apt-get install -y ~/Downloads/leap*.deb

Your download path may vary. If you are in an Ubuntu docker container, omit sudo because you run as root by default.

Finally, verify Leap was installed correctly:

nodeos --full-version

You should see a semantic version string followed by a git commit hash with no errors. For example:

v3.1.2-0b64f879e3ebe2e4df09d2e62f1fc164cc1125d1
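If you want to script this check, the expected shape is a semantic version followed by a 40-character git commit hash. A minimal sketch (the `is_leap_version` helper and the hard-coded string are illustrative; in practice you would test `"$(nodeos --full-version)"`):

```shell
# Check that a version string matches "vMAJOR.MINOR.PATCH-<40-hex-digit hash>".
is_leap_version() {
    echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+-[0-9a-f]{40}$'
}

# Example string taken from this README; substitute `nodeos --full-version`.
if is_leap_version "v3.1.2-0b64f879e3ebe2e4df09d2e62f1fc164cc1125d1"; then
    echo "version string OK"
fi
```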

Build and Install from Source

You can also build and install Leap from source.

Prerequisites

You will need to build on a supported operating system.

Requirements to build:

  • C++20 compiler and standard library
  • CMake 3.16+
  • LLVM 7 through 11 (Linux only; newer versions do not work)
  • libcurl 7.40.0+
  • git
  • GMP
  • Python 3
  • python3-numpy
  • zlib

Step 1 - Clone

If you don't have the Leap repo cloned to your computer yet, open a terminal and navigate to the folder where you want to clone the Leap repository:

cd ~/Downloads

Clone Leap using either HTTPS...

git clone --recursive https://github.com/AntelopeIO/leap.git

...or SSH:

git clone --recursive [email protected]:AntelopeIO/leap.git

ℹ️ HTTPS vs. SSH Clone ℹ️
Both HTTPS and SSH clones yield the same result: a folder named leap containing our source code. It doesn't matter which type you use.

Navigate into that folder:

cd leap

Step 2 - Checkout Release Tag or Branch

Choose which release or branch you would like to build, then check it out. If you are not sure, use the latest release. For example, if you want to build release 3.1.2 then you would check it out using its tag, v3.1.2. In the example below, replace v0.0.0 with your selected release tag accordingly:

git fetch --all --tags
git checkout v0.0.0

Once you are on the branch or release tag you want to build, make sure everything is up-to-date:

git pull
git submodule update --init --recursive

Step 3 - Build

Select build instructions below for a pinned build (preferred) or an unpinned build.

ℹ️ Pinned vs. Unpinned Build ℹ️
We have two types of builds for Leap: "pinned" and "unpinned." A pinned build is a reproducible build with the build environment and dependency versions fixed by the development team. In contrast, unpinned builds use the dependency versions provided by the build platform. Unpinned builds tend to be quicker because the pinned build environment must be built from scratch. Pinned builds, in addition to being reproducible, ensure the compiler remains the same between builds of different Leap major versions. Leap requires the compiler version to remain the same, otherwise its state might need to be recovered from a portable snapshot or the chain needs to be replayed.

⚠️ A Warning On Parallel Compilation Jobs (-j flag) ⚠️
When building C/C++ software, often the build is performed in parallel via a command such as make -j "$(nproc)" which uses all available CPU threads. However, be aware that some compilation units (*.cpp files) in Leap will consume nearly 4GB of memory. Failures due to memory exhaustion will typically, but not always, manifest as compiler crashes. Using all available CPU threads may also prevent you from doing other things on your computer during compilation. For these reasons, consider reducing this value.
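One way to pick a safer `-j` value is to cap the CPU-thread count by available memory. This is a sketch, assuming a Linux host with `/proc/meminfo` and the ~4 GB-per-compilation-unit figure mentioned above; the `build_jobs` helper name is ours:

```shell
# Suggest a -j value limited by both CPU threads and available memory,
# assuming each compilation unit may need ~4 GB.
build_jobs() {
    mem_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
    mem_kb=${mem_kb:-0}                        # fall back to 0 if the field is absent
    mem_jobs=$(( mem_kb / (4 * 1024 * 1024) )) # 4 GB expressed in kB
    cpu_jobs=$(nproc)
    jobs=$(( mem_jobs < cpu_jobs ? mem_jobs : cpu_jobs ))
    if [ "$jobs" -lt 1 ]; then jobs=1; fi      # always run at least one job
    echo "$jobs"
}

echo "suggested: make -j $(build_jobs) package"
```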

🐋 Docker and sudo 🐋
If you are in an Ubuntu docker container, omit sudo from all commands because you run as root by default. Most other docker containers also exclude sudo, especially Debian-family containers. If your shell prompt is a hash sign (#), omit sudo.
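If you script these instructions, the root check above can be made automatic. A minimal sketch (the `SUDO` variable is our own convention, not part of Leap):

```shell
# Prefix privileged commands with sudo only when not already root,
# so the same script works on a host and inside a docker container.
SUDO=""
if [ "$(id -u)" -ne 0 ]; then
    SUDO="sudo"
fi

# Print the command that would be run, e.g. "sudo apt-get update" or "apt-get update".
echo "${SUDO:+$SUDO }apt-get update"
```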

Pinned Reproducible Build

The pinned reproducible build requires Docker. Make sure you are in the root of the leap repo and then run

DOCKER_BUILDKIT=1 docker build -f tools/reproducible.Dockerfile -o . .

This command will take a substantial amount of time because a toolchain is built from scratch. Upon completion, the current directory will contain a built .deb and .tar.gz (you can change the -o . argument to place the output in a different directory). If you need to reduce the number of parallel jobs, as warned above, run the command as:

DOCKER_BUILDKIT=1 docker build --build-arg LEAP_BUILD_JOBS=4 -f tools/reproducible.Dockerfile -o . .

Unpinned Build

The following instructions are valid for this branch. Other release branches may have different requirements, so ensure you follow the directions in the branch or release you intend to build. If you are in an Ubuntu docker container, omit sudo because you run as root by default.

Install dependencies:

sudo apt-get update
sudo apt-get install -y \
        build-essential \
        cmake \
        git \
        libcurl4-openssl-dev \
        libgmp-dev \
        llvm-11-dev \
        python3-numpy \
        file \
        zlib1g-dev

On Ubuntu 20.04, install gcc-10 which has C++20 support:

sudo apt-get install -y g++-10

To build, make sure you are in the root of the leap repo, then run the following command:

mkdir -p build
cd build

## on Ubuntu 20, specify the gcc-10 compiler
cmake -DCMAKE_C_COMPILER=gcc-10 -DCMAKE_CXX_COMPILER=g++-10 -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/usr/lib/llvm-11 ..

## on Ubuntu 22, the default gcc version is 11, using the default compiler is fine
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH=/usr/lib/llvm-11 ..

make -j "$(nproc)" package

Now you can optionally test your build, or install the *.deb binary packages, which will be in the root of your build directory.

Step 4 - Test

Leap supports the following test suites:

| Test Suite | Test Type | Test Size | Notes |
| --- | --- | --- | --- |
| Parallelizable tests | Unit tests | Small | |
| WASM spec tests | Unit tests | Small | Unit tests for our WASM runtime; each is short but very CPU-intensive |
| Serial tests | Component/Integration | Medium | |
| Long-running tests | Integration | Medium-to-Large | Tests which take an extraordinarily long amount of time to run |

When building from source, we recommend running at least the parallelizable tests.

Parallelizable Tests

This test suite consists of any test that does not require shared resources, such as file descriptors, specific folders, or ports, and can therefore be run concurrently in different threads without side effects (hence, easily parallelized). These are mostly unit tests and small tests which complete in a short amount of time.

You can invoke them by running ctest from a terminal in your Leap build directory and specifying the following arguments:

ctest -j "$(nproc)" -LE _tests

WASM Spec Tests

The WASM spec tests verify that our WASM execution engine is compliant with the web assembly standard. These are very small, very fast unit tests. However, there are over a thousand of them so the suite can take a little time to run. These tests are extremely CPU-intensive.

You can invoke them by running ctest from a terminal in your Leap build directory and specifying the following arguments:

ctest -j "$(nproc)" -L wasm_spec_tests

We have observed severe performance issues when multiple virtual machines run this test suite on the same physical host at the same time, for example in a CI/CD system. This can be resolved by disabling hyperthreading on the host.

Serial Tests

The serial test suite consists of medium component or integration tests that use specific paths, ports, rely on process names, or similar, and cannot be run concurrently with other tests. Serial tests can be sensitive to other software running on the same host and they may SIGKILL other nodeos processes. These tests take a moderate amount of time to complete, but we recommend running them.

You can invoke them by running ctest from a terminal in your Leap build directory and specifying the following arguments:

ctest -L "nonparallelizable_tests"

Long-Running Tests

The long-running tests are medium-to-large integration tests that rely on shared resources and take a very long time to run.

You can invoke them by running ctest from a terminal in your Leap build directory and specifying the following arguments:

ctest -L "long_running_tests"

Step 5 - Install

Once you have built Leap and tested your build, you can install Leap on your system. Don't forget to omit sudo if you are running in a docker container.

We recommend installing the binary package you just built. Navigate to your Leap build directory in a terminal and run this command:

sudo apt-get update
sudo apt-get install -y ./leap[-_][0-9]*.deb

It is also possible to install using make instead:

sudo make install

Bash Autocomplete

cleos and leap-util offer a substantial amount of functionality. Consider using bash's autocompletion support which makes it easier to discover all their various options.

For our provided .deb packages, simply install Ubuntu's bash-completion package: apt-get install bash-completion (you may need to log out and back in after installing).

If building from source, install the build/programs/cleos/bash-completion/completions/cleos and build/programs/leap-util/bash-completion/completions/leap-util files to your bash-completion directory. Refer to bash-completion's documentation for the possible install locations.
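One common location bash-completion searches is the per-user completions directory. A sketch under that assumption (the destination path follows the XDG convention; the build/ paths are the ones named above and only exist after a source build):

```shell
# Copy the completion files into the per-user bash-completion directory.
# Assumes the default XDG data location; see bash-completion's docs for others.
dest="${XDG_DATA_HOME:-$HOME/.local/share}/bash-completion/completions"
mkdir -p "$dest"

for f in build/programs/cleos/bash-completion/completions/cleos \
         build/programs/leap-util/bash-completion/completions/leap-util; do
    # Skip silently if the file is absent (e.g. before building from source).
    if [ -f "$f" ]; then
        cp "$f" "$dest/"
    fi
done

echo "completions directory: $dest"
```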

leap's People

Contributors

arhag, asiniscalchi, b1bart, brianjohnson5972, bytemaster, cj-oci, claytoncalabrese, dskvr, elmato, greg7mdp, heifner, huangminghuang, jgiszczak, kj4ezj, larryk85, linh2931, lparisc, moskvanaft, nathanielhourt, ndcgundlach, norsegaud, oschwaldp-oci, pmesnier, scottarnette, spoonincode, taokayan, tbfleming, vladtr, wanderingbort, zorba80


leap's Issues

Hot-Reload of specific configuration-parameters

When node operators on WAX recently saw issues in block production, many had to restart nodeos processes frequently to update specific configuration parameters such as subjective-billing, cpu-effort, and greylists/blacklists/whitelists.

A common approach is to:

  1. Change the configuration (config.ini)
  2. Kill the nodeos process after successfully producing a round
  3. Restart immediately so nodeos is running and caught up again by the time the next round needs to be produced on this producer node

This approach seems unnecessarily complex and error-prone. A possible improvement would be a hot reload on config change for parameters that can be changed at runtime, as an alternative to additional RPC API functionality, which always carries drawbacks such as accidentally exposing endpoints to the public.

get_transaction_status not responding

root@iZt4n2zp3wr6kkpibstlf4Z:/coins/eos/main# cleos -u http://127.0.0.1:6666 get transaction-status 2FA3EB63C1E2626F0A5B3238E137830EC088A2F19D14AE0B12BB5F3ED3011493

this is my configuration

#    producer-name = !!_YOUR_PRODUCER_NAME_!!!
#    signature-provider = YOUR_PUB_KEY_HERE=KEY:YOUR_PRIV_KEY_HERE

    http-server-address = 0.0.0.0:6666
    p2p-listen-endpoint = 0.0.0.0:9876
    p2p-server-address = 172.31.88.66:9876
    
    chain-state-db-size-mb = 26384
    #reversible-blocks-db-size-mb = 1024
    
    contracts-console = true
    
    p2p-max-nodes-per-host = 100
        
    chain-threads = 8
    http-threads = 6

    # eosio2.0
    http-max-response-time-ms = 100

    #wasm-runtime = wabt
    #Only!! for performance eosio 2.0+
    eos-vm-oc-compile-threads = 4
    eos-vm-oc-enable = 0
    wasm-runtime = eos-vm-jit
    #END
    http-validate-host = false
    verbose-http-errors = true
    abi-serializer-max-time-ms = 2000
    enable-account-queries = true #only for API node
    #option from eosio 2.0.7
    max-nonprivileged-inline-action-size = 4096
    #produce-time-offset-us = 250000
    last-block-time-offset-us = -300000
            

    # Safely shut down node when free space
    chain-state-db-guard-size-mb = 128
    #reversible-blocks-db-guard-size-mb = 2


    access-control-allow-origin = *
    access-control-allow-headers = Origin, X-Requested-With, Content-Type, Accept
 
    allowed-connection = any
    

   
    max-clients = 150
    connection-cleanup-period = 30
    #network-version-match = 0
    sync-fetch-span = 2000
    enable-stale-production = false

    
    pause-on-startup = false
    max-irreversible-block-age = -1
    txn-reference-block-lag = 0
    

    plugin = eosio::http_plugin
    plugin = eosio::producer_plugin
    #plugin = eosio::producer_api_plugin
    plugin = eosio::chain_plugin
    plugin = eosio::chain_api_plugin
    plugin = eosio::trace_api_plugin
    transaction-finality-status-max-storage-size-gb = 5
    
    trace-no-abis = true


p2p-peer-address = eos.seed.eosnation.io:9876
p2p-peer-address = peer1.eosphere.io:9876
p2p-peer-address = peer2.eosphere.io:9876
p2p-peer-address = p2p.genereos.io:9876

Compile warnings in deep_mind.cpp, abi_serializer.cpp, producer_plugin.cpp, auth_tests.cpp

Minor warnings:

libraries/chain/deep_mind.cpp:46:4: warning: control reaches end of non-void function [-Wreturn-type]

libraries/chain/abi_serializer.cpp:80:42: warning: ‘void eosio::chain::abi_serializer::set_abi(const eosio::chain::abi_def&, const fc::microseconds&)’ is deprecated: use the overload with yield_function_t[=create_yield_function(max_serialization_time)] [-Wdeprecated-declarations]

plugins/producer_plugin/producer_plugin.cpp:1959:43: warning: comparison of integer expressions of different signedness: ‘uint64_t’ {aka ‘long unsigned int’} and ‘int64_t’ {aka  ‘long int’} [-Wsign-compare]
 1959 |                if( prev_billed_plus100_us < max_trx_time.count() ) max_trx_time = fc::microseconds( prev_billed_plus100_us );

unittests/auth_tests.cpp:235:18: warning: variable ‘new_owner_priv_key’ set but not used [-Wunused-but-set-variable]

nodeos should check disk space immediately on startup instead of first loading all the blockchain data

Nodeos stops with the following error:

 nodeos    file_space_handler.hpp:112    add_file_system      ] /var/lib/nodeos/blocks's file system monitored. shutdown_available: 5259198460, capacity: 52591984640, threshold: 90
resmon    file_space_handler.hpp:62     is_threshold_exceede ] Space usage warning: /var/lib/nodeos/blocks's file system exceeded threshold 90%, available: 5215444992, Capacity: 52591984640, shutdown_available: 5259198460
[etc...]
info  2022-08-26T17:02:35.539 nodeos    main.cpp:181                  main                 ] nodeos successfully exiting

Then if you start nodeos again, it goes through the whole process of loading all the blockchain state (which takes a while with database-map-mode = heap). Once that is done, it exits again.

When you have nodeos running under a supervisor that restarts it whenever it exits (e.g. systemd), this just burns CPU and disk I/O.

Two suggested improvements when there is insufficient disk space:

  • hard fail (non-zero exit code)
    • possibly exit 0 on the initial shutdown, then exit non-zero if started with insufficient disk space
  • exit as quickly as possible on startup (without loading all the blockchain data)

Evaluate existing plugin initialize and startup steps

Today we believe that plugins should follow these phases:

  • Initialize - command line, config, validation, and initial processing
  • Startup - everything else

We'd like to evaluate existing plugins to identify edge cases and confirm that we are or aren't broadly adopting this pattern. From there, we can submit an official proposal to adhere to for new plugins.

proposing MSIG throws "Failed with error: Bad Cast"

Issue

Proposing an MSIG using cleos or eosc fails when the transaction includes "data": {} objects.

Versions

  • querying endpoint Nodeos version: v3.1.0
  • local cleos version: v2.1.0
  • eosc version: v1.4.0

To replicate the issue:

1. Create transaction

$ cleos -u https://kylin.api.eosnation.io transfer eosio eosio.null "1.0000 EOS" "propose as MSIG" -s -d --json-file transfer.json --expiration 5000000

transfer.json transaction

{
  "expiration": "2022-11-05T08:26:34",
  "ref_block_num": 61319,
  "ref_block_prefix": 2545640930,
  "max_net_usage_words": 0,
  "max_cpu_usage_ms": 0,
  "delay_sec": 0,
  "context_free_actions": [],
  "actions": [{
      "account": "eosio.token",
      "name": "transfer",
      "authorization": [{
          "actor": "eosio",
          "permission": "active"
        }
      ],
      "data": {
        "from": "eosio",
        "to": "eosio.null",
        "quantity": "1.0000 EOS",
        "memo": "propose as MSIG"
      },
      "hex_data": "0000000000ea305500408c7a02ea3055102700000000000004454f53000000000f70726f706f7365206173204d534947"
    }
  ],
  "signatures": [],
  "context_free_data": []
}

2. Propose as MSIG

with cleos

$ cleos -u https://kylin.api.eosnation.io multisig propose_trx transfer '[{"actor":"eosio","permission":"active"}]' transfer.json <account>
error 2022-09-08T11:34:45.230 thread-0  main.cpp:4371                 operator()           ] Failed with error: Bad Cast (7)
Invalid cast from type 'object_type' to string

with eosc

$ eosc -u https://kylin.api.eosnation.io multisig propose <account> transfer transfer.json --request eosio    
Enter passphrase to decrypt your vault: 
ERROR: signing transaction: get_required_keys: json: error calling MarshalJSON for type *eos.Action: Encode: unsupported type map[string]interface {}

Manually resolving by removing data

{
  "expiration": "2022-11-05T08:26:34",
  "ref_block_num": 61319,
  "ref_block_prefix": 2545640930,
  "max_net_usage_words": 0,
  "max_cpu_usage_ms": 0,
  "delay_sec": 0,
  "context_free_actions": [],
  "actions": [{
      "account": "eosio.token",
      "name": "transfer",
      "authorization": [{
          "actor": "eosio",
          "permission": "active"
        }
      ],
      "hex_data": "0000000000ea305500408c7a02ea3055102700000000000004454f53000000000f70726f706f7365206173204d534947"
    }
  ],
  "signatures": [],
  "context_free_data": []
}
  • ✅ works with eosc
  • ❌ still throws error with cleos

Error 3015014: Pack data exception
Error Details:
Missing field 'data' in input object while processing struct 'propose.trx.actions[0]'

3.1.0: if p2p socket fails to bind, nodeos crashes leaving DB dirty

In the background do a

nc -l -p 9876

to tie up the port.
Then just

./nodeos
...
info  2022-08-26T18:37:07.548 nodeos    resource_monitor_plugi:94     plugin_startup       ] Creating and starting monitor thread
info  2022-08-26T18:37:07.549 nodeos    file_space_handler.hpp:112    add_file_system      ] /root/.local/share/eosio/nodeos/data/blocks's file system monitored. shutdown_available: 50010786200, capacity: 500107862016, threshold: 90
warn  2022-08-26T18:37:07.549 resmon    file_space_handler.hpp:66     is_threshold_exceede ] Space usage warning: /root/.local/share/eosio/nodeos/data/blocks's file system approaching threshold. available: 74965753856, warning_available: 75016179300
warn  2022-08-26T18:37:07.549 resmon    file_space_handler.hpp:68     is_threshold_exceede ] nodeos will shutdown when space usage exceeds threshold 90%
error 2022-08-26T18:37:07.549 nodeos    net_plugin.cpp:3746           operator()           ] net_plugin::plugin_startup failed to bind to port 9876
error 2022-08-26T18:37:07.549 nodeos    main.cpp:174                  main                 ] std::exception
terminate called without an active exception
Aborted (core dumped)

This crash occurs early enough that the DB is left dirty.

v3.1.0-50894acec3991dc108516089be52c47d61eabf88

(this is 3.1.0 post CI branch protection modification)

chain api not working

I try to get the node height and it returns a response like this:

curl -s -X POST http://127.0.0.1:8888/v1/chain/get_info

{
  "code": 429,
  "message": "Busy",
  "error": {
    "code": 429,
    "name": "Busy",
    "what": "Too many bytes in flight: 897",
    "details": []
  }
}

log output:

info  2022-08-25T00:59:03.818 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block a715bdc2807485f5... #264435005 @ 2022-08-25T00:59:04.000 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -181 ms]
info  2022-08-25T00:59:04.316 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block 67fdc4682faad5f6... #264435006 @ 2022-08-25T00:59:04.500 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -183 ms]
info  2022-08-25T00:59:04.819 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block d562a7213f699348... #264435007 @ 2022-08-25T00:59:05.000 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -180 ms]
info  2022-08-25T00:59:05.216 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block 56313fe9ce7a0868... #264435008 @ 2022-08-25T00:59:05.500 signed by eosrapidprod [trxs: 3, lib: 264434672, conf: 0, latency: -283 ms]
info  2022-08-25T00:59:05.852 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block 8b219bf51b26089c... #264435009 @ 2022-08-25T00:59:06.000 signed by hashfineosio [trxs: 4, lib: 264434684, conf: 240, latency: -147 ms]
info  2022-08-25T00:59:05.937 net-1     net_plugin.cpp:2943           handle_message       ] ["eosn-eos-seed22a:9876 - 402b976" - 2 216.66.68.24:9876] received time_message
info  2022-08-25T00:59:06.634 net-1     net_plugin.cpp:2943           handle_message       ] ["p2p.eosflare.io:9876 - b158156" - 5 108.171.210.180:9876] received time_message
info  2022-08-25T00:59:06.778 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block e4c5e56354d3eead... #264435010 @ 2022-08-25T00:59:06.500 signed by hashfineosio [trxs: 2, lib: 264434684, conf: 0, latency: 278 ms]
info  2022-08-25T00:59:06.973 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block ac31b20e4b81560a... #264435011 @ 2022-08-25T00:59:07.000 signed by hashfineosio [trxs: 5, lib: 264434684, conf: 0, latency: -26 ms]
info  2022-08-25T00:59:07.357 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block 64f1bb87009fffee... #264435012 @ 2022-08-25T00:59:07.500 signed by hashfineosio [trxs: 3, lib: 264434684, conf: 0, latency: -142 ms]
info  2022-08-25T00:59:07.843 nodeos    producer_plugin.cpp:504       on_incoming_block    ] Received block e1f1cbaecb938fbc... #264435013 @ 2022-08-25T00:59:08.000 signed by hashfineosio [trxs: 3, lib: 264434684, conf: 0, latency: -156 ms]

OS Version:Docker Ubuntu 20.04
Node Version: v3.1.0

I started from a snapshot load

/opt/eosmain/core/nodeos --data-dir=/mnt/eosmain/node --config-dir=/mnt/eosmain/conf --snapshot=/mnt/eosmain/snapshot/snapshot-v6-latest.bin --disable-replay-opts

my config file

# the location of the blocks directory (absolute path or relative to application data dir) (eosio::chain_plugin)
blocks-dir = "/mnt/eosmain/node"

# the location of the protocol_features directory (absolute path or relative to application config dir) (eosio::chain_plugin)
# protocol-features-dir = "protocol_features"

# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints. (eosio::chain_plugin)
# checkpoint =

# Override default WASM runtime ( "eos-vm-jit", "eos-vm")
# "eos-vm-jit" : A WebAssembly runtime that compiles WebAssembly code to native x86 code prior to execution.
# "eos-vm" : A WebAssembly interpreter.
#  (eosio::chain_plugin)
wasm-runtime = eos-vm-jit

# The name of an account whose code will be profiled (eosio::chain_plugin)
# profile-account =

# Override default maximum ABI serialization time allowed in ms (eosio::chain_plugin)
abi-serializer-max-time-ms = 60000

# Maximum size (in MiB) of the chain state database (eosio::chain_plugin)
chain-state-db-size-mb = 40960

# Safely shut down node when free space remaining in the chain state database drops below this size (in MiB). (eosio::chain_plugin)
chain-state-db-guard-size-mb = 128

# Percentage of actual signature recovery cpu to bill. Whole number percentages, e.g. 50 for 50% (eosio::chain_plugin)
# signature-cpu-billable-pct = 50

# Number of worker threads in controller thread pool (eosio::chain_plugin)
# chain-threads = 2

# print contract's output to console (eosio::chain_plugin)
contracts-console = true

# print deeper information about chain operations (eosio::chain_plugin)
# deep-mind = false

# Account added to actor whitelist (may specify multiple times) (eosio::chain_plugin)
# actor-whitelist =

# Account added to actor blacklist (may specify multiple times) (eosio::chain_plugin)
# actor-blacklist =

# Contract account added to contract whitelist (may specify multiple times) (eosio::chain_plugin)
# contract-whitelist =

# Contract account added to contract blacklist (may specify multiple times) (eosio::chain_plugin)
# contract-blacklist =

# Action (in the form code::action) added to action blacklist (may specify multiple times) (eosio::chain_plugin)
# action-blacklist =

# Public key added to blacklist of keys that should not be included in authorities (may specify multiple times) (eosio::chain_plugin)
# key-blacklist =

# Deferred transactions sent by accounts in this list do not have any of the subjective whitelist/blacklist checks applied to them (may specify multiple times) (eosio::chain_plugin)
# sender-bypass-whiteblacklist =

# Database read mode ("speculative", "head", "read-only", "irreversible").
# In "speculative" mode: database contains state changes by transactions in the blockchain up to the head block as well as some transactions not yet included in the blockchain.
# In "head" mode: database contains state changes by only transactions in the blockchain up to the head block; transactions received by the node are relayed if valid.
# In "read-only" mode: (DEPRECATED: see p2p-accept-transactions & api-accept-transactions) database contains state changes by only transactions in the blockchain up to the head block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
# In "irreversible" mode: database contains state changes by only transactions in the blockchain up to the last irreversible block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
#  (eosio::chain_plugin)
# read-mode = speculative

# Allow API transactions to be evaluated and relayed if valid. (eosio::chain_plugin)
# api-accept-transactions = true

# Chain validation mode ("full" or "light").
# In "full" mode all incoming blocks will be fully validated.
# In "light" mode all incoming blocks headers will be fully validated; transactions in those validated blocks will be trusted
#  (eosio::chain_plugin)
# validation-mode = full

# Disable the check which subjectively fails a transaction if a contract bills more RAM to another account within the context of a notification handler (i.e. when the receiver is not the code of the action). (eosio::chain_plugin)
# disable-ram-billing-notify-checks = false

# Subjectively limit the maximum length of variable components in a variable length signature to this size in bytes (eosio::chain_plugin)
# maximum-variable-signature-length = 16384

# Indicate a producer whose blocks headers signed by it will be fully validated, but transactions in those validated blocks will be trusted. (eosio::chain_plugin)
# trusted-producer =

# Database map mode ("mapped", "heap", or "locked").
# In "mapped" mode database is memory mapped as a file.
# In "heap" mode database is preloaded in to swappable memory and will use huge pages if available.
# In "locked" mode database is preloaded, locked in to memory, and will use huge pages if available.
#  (eosio::chain_plugin)
# database-map-mode = mapped

# Maximum size (in MiB) of the EOS VM OC code cache (eosio::chain_plugin)
# eos-vm-oc-cache-size-mb = 1024

# Number of threads to use for EOS VM OC tier-up (eosio::chain_plugin)
# eos-vm-oc-compile-threads = 1

# Enable EOS VM OC tier-up runtime (eosio::chain_plugin)
eos-vm-oc-enable = true

# enable queries to find accounts by various metadata. (eosio::chain_plugin)
# enable-account-queries = false

# maximum allowed size (in bytes) of an inline action for a nonprivileged account (eosio::chain_plugin)
# max-nonprivileged-inline-action-size = 4096

# Maximum size (in GiB) allowed to be allocated for the Transaction Retry feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-retry-max-storage-size-gb =

# How often, in seconds, to resend an incoming transaction to network if not seen in a block. (eosio::chain_plugin)
# transaction-retry-interval-sec = 20

# Maximum allowed transaction expiration for retry transactions, will retry transactions up to this value. (eosio::chain_plugin)
# transaction-retry-max-expiration-sec = 120

# Maximum size (in GiB) allowed to be allocated for the Transaction Finality Status feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-finality-status-max-storage-size-gb =

# Duration (in seconds) a successful transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-success-duration-sec = 180

# Duration (in seconds) a failed transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-failure-duration-sec = 180

# if set, periodically prune the block log to store only configured number of most recent blocks (eosio::chain_plugin)
block-log-retain-blocks = 10240

# PEM encoded trusted root certificate (or path to file containing one) used to validate any TLS connections made.  (may specify multiple times)
#  (eosio::http_client_plugin)
# https-client-root-cert =

# true: validate that the peer certificates are valid and trusted, false: ignore cert errors (eosio::http_client_plugin)
# https-client-validate-peers = true

# The filename (relative to data-dir) to create a unix socket for HTTP RPC; set blank to disable. (eosio::http_plugin)
# unix-socket-path =

# The local IP and port to listen for incoming http connections; set blank to disable. (eosio::http_plugin)
http-server-address = 0.0.0.0:8888

# The local IP and port to listen for incoming https connections; leave blank to disable. (eosio::http_plugin)
# https-server-address =

# Filename with the certificate chain to present on https connections. PEM format. Required for https. (eosio::http_plugin)
# https-certificate-chain-file =

# Filename with https private key in PEM format. Required for https (eosio::http_plugin)
# https-private-key-file =

# Configure https ECDH curve to use: secp384r1 or prime256v1 (eosio::http_plugin)
# https-ecdh-curve = secp384r1

# Specify the Access-Control-Allow-Origin to be returned on each request. (eosio::http_plugin)
access-control-allow-origin = *

# Specify the Access-Control-Allow-Headers to be returned on each request. (eosio::http_plugin)
# access-control-allow-headers =

# Specify the Access-Control-Max-Age to be returned on each request. (eosio::http_plugin)
# access-control-max-age =

# Specify if Access-Control-Allow-Credentials: true should be returned on each request. (eosio::http_plugin)
# access-control-allow-credentials = false

# The maximum body size in bytes allowed for incoming RPC requests (eosio::http_plugin)
max-body-size = 8388608

# Maximum size in megabytes http_plugin should use for processing http requests. 503 error response when exceeded. (eosio::http_plugin)
http-max-bytes-in-flight-mb = 4096

# Maximum time for processing a request. (eosio::http_plugin)
http-max-response-time-ms = 100000

# Append the error log to HTTP responses (eosio::http_plugin)
verbose-http-errors = true

# If set to false, then any incoming "Host" header is considered valid (eosio::http_plugin)
http-validate-host = false

# Additionally acceptable values for the "Host" header of incoming HTTP requests; can be specified multiple times. Includes http/s_server_address by default. (eosio::http_plugin)
# http-alias =

# Number of worker threads in http thread pool (eosio::http_plugin)
http-threads = 4096

# The maximum number of pending login requests (eosio::login_plugin)
# max-login-requests = 1000000

# The maximum timeout for pending login requests (in seconds) (eosio::login_plugin)
# max-login-timeout = 60

# The actual host:port used to listen for incoming p2p connections. (eosio::net_plugin)
p2p-listen-endpoint = 0.0.0.0:9876

# An externally accessible host:port for identifying this node. Defaults to p2p-listen-endpoint. (eosio::net_plugin)
# p2p-server-address =

# The public endpoint of a peer node to connect to. Use multiple p2p-peer-address options as needed to compose a network.
#   Syntax: host:port[:<trx>|<blk>]
#   The optional 'trx' and 'blk' suffixes indicate to the node that only transactions ('trx') or only blocks ('blk') should be sent.  Examples:
#     p2p.eos.io:9876
#     p2p.trx.eos.io:9876:trx
#     p2p.blk.eos.io:9876:blk
#  (eosio::net_plugin)
# Peer list source: https://mainnet.eosio.online/endpoints (updated August 22, 2022)
p2p-peer-address = eos.seed.eosnation.io:9876
p2p-peer-address = eos.edenia.cloud:9876
p2p-peer-address = p2p.eossweden.org:9876
p2p-peer-address = p2p.eosflare.io:9876
p2p-peer-address = peer.main.alohaeos.com:9876
p2p-peer-address = seed.greymass.com:9876
p2p-peer-address = p2p-eos.whaleex.com:9876
p2p-peer-address = peer.eosio.sg:9876
p2p-peer-address = p2p.genereos.io:9876
p2p-peer-address = p2p.eos.detroitledger.tech:1337

# Maximum number of client nodes from any single IP address (eosio::net_plugin)
# p2p-max-nodes-per-host = 1

# Allow transactions received over p2p network to be evaluated and relayed if valid. (eosio::net_plugin)
# p2p-accept-transactions = true

# The name supplied to identify this node amongst the peers. (eosio::net_plugin)
agent-name = "NodeHub-EOS"

# Can be 'any' or 'producers' or 'specified' or 'none'. If 'specified', peer-key must be specified at least once. If only 'producers', peer-key is not required. 'producers' and 'specified' may be combined. (eosio::net_plugin)
allowed-connection = any

# Optional public key of peer allowed to connect.  May be used multiple times. (eosio::net_plugin)
# peer-key =

# Tuple of [PublicKey, WIF private key] (may specify multiple times) (eosio::net_plugin)
# peer-private-key =

# Maximum number of clients from which connections are accepted, use 0 for no limit (eosio::net_plugin)
max-clients = 100

# number of seconds to wait before cleaning up dead connections (eosio::net_plugin)
connection-cleanup-period = 60

# max connection cleanup time per cleanup call in milliseconds (eosio::net_plugin)
# max-cleanup-time-msec = 10

# Maximum time to track transaction for duplicate optimization (eosio::net_plugin)
# p2p-dedup-cache-expire-time-sec = 10

# Number of worker threads in net_plugin thread pool (eosio::net_plugin)
# net-threads = 2

# number of blocks to retrieve in a chunk from any individual peer during synchronization (eosio::net_plugin)
sync-fetch-span = 500

# Enable experimental socket read watermark optimization (eosio::net_plugin)
# use-socket-read-watermark = false

# The string used to format peers when logging messages about them.  Variables are escaped with ${<variable name>}.
# Available Variables:
#    _name  	self-reported name
#
#    _cid   	assigned connection id
#
#    _id    	self-reported ID (64 hex characters)
#
#    _sid   	first 8 characters of _peer.id
#
#    _ip    	remote IP address of peer
#
#    _port  	remote port number of peer
#
#    _lip   	local IP address connected to peer
#
#    _lport 	local port number connected to peer
#
#  (eosio::net_plugin)
# peer-log-format = ["${_name}" - ${_cid} ${_ip}:${_port}]

# peer heartbeat keepalive message interval in milliseconds (eosio::net_plugin)
# p2p-keepalive-interval-ms = 10000

# Enable block production, even if the chain is stale. (eosio::producer_plugin)
enable-stale-production = false

# Start this node in a state where production is paused (eosio::producer_plugin)
# pause-on-startup = false

# Limits the maximum time (in milliseconds) that a pushed transaction's code is allowed to execute before being considered invalid (eosio::producer_plugin)
max-transaction-time = 60000

# Limits the maximum age (in seconds) of the DPOS Irreversible Block for a chain this node will produce blocks on (use negative value to indicate unlimited) (eosio::producer_plugin)
max-irreversible-block-age = -1

# ID of producer controlled by this node (e.g. inita; may specify multiple times) (eosio::producer_plugin)
# producer-name =

# (DEPRECATED - Use signature-provider instead) Tuple of [public key, WIF private key] (may specify multiple times) (eosio::producer_plugin)
# private-key =

# Key=Value pairs in the form <public-key>=<provider-spec>
# Where:
#    <public-key>    	is a string form of a valid EOSIO public key
#
#    <provider-spec> 	is a string in the form <provider-type>:<data>
#
#    <provider-type> 	is KEY, or KEOSD
#
#    KEY:<data>      	is a string form of a valid EOSIO private key which maps to the provided public key
#
#    KEOSD:<data>    	is the URL where keosd is available and the appropriate wallet(s) are unlocked (eosio::producer_plugin)
# signature-provider = EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3

# Limits the maximum time (in milliseconds) that is allowed for sending blocks to a keosd provider for signing (eosio::producer_plugin)
# keosd-provider-timeout = 5

# Account that cannot access extended CPU/NET virtual resources (eosio::producer_plugin)
greylist-account = blocktwitter
greylist-account = chaintwitter
greylist-account = eidosonecoin

# Limit (between 1 and 1000) on the multiple that CPU/NET virtual resources can extend during low usage (only enforced subjectively; use 1000 to not enforce any limit) (eosio::producer_plugin)
# greylist-limit = 1000

# Offset of non-last-block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# produce-time-offset-us = 0

# Offset of last block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# last-block-time-offset-us = -200000

# Percentage of cpu block production time used to produce block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# cpu-effort-percent = 80

# Percentage of cpu block production time used to produce last block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# last-block-cpu-effort-percent = 80

# Threshold of CPU block production to consider block full; when within threshold of max-block-cpu-usage block can be produced immediately (eosio::producer_plugin)
# max-block-cpu-usage-threshold-us = 5000

# Threshold of NET block production to consider block full; when within threshold of max-block-net-usage block can be produced immediately (eosio::producer_plugin)
# max-block-net-usage-threshold-bytes = 1024

# Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing. (eosio::producer_plugin)
# max-scheduled-transaction-time-per-block-ms = 100

# Time in microseconds allowed for a transaction that starts with insufficient CPU quota to complete and cover its CPU usage. (eosio::producer_plugin)
# subjective-cpu-leeway-us = 31000

# Sets the maximum amount of failures that are allowed for a given account per block. (eosio::producer_plugin)
# subjective-account-max-failures = 3

# Sets the time to return full subjective cpu for accounts (eosio::producer_plugin)
# subjective-account-decay-time-minutes = 1440

# ratio between incoming transactions and deferred transactions when both are queued for execution (eosio::producer_plugin)
# incoming-defer-ratio = 1

# Maximum size (in MiB) of the incoming transaction queue. Exceeding this value will subjectively drop transactions with resource exhaustion. (eosio::producer_plugin)
# incoming-transaction-queue-size-mb = 1024

# Disable the re-apply of API transactions. (eosio::producer_plugin)
# disable-api-persisted-trx = false

# Disable subjective CPU billing for API/P2P transactions (eosio::producer_plugin)
# disable-subjective-billing = true

# Account which is excluded from subjective CPU billing (eosio::producer_plugin)
# disable-subjective-account-billing =

# Disable subjective CPU billing for P2P transactions (eosio::producer_plugin)
# disable-subjective-p2p-billing = true

# Disable subjective CPU billing for API transactions (eosio::producer_plugin)
# disable-subjective-api-billing = true

# Number of worker threads in producer thread pool (eosio::producer_plugin)
# producer-threads = 2

# the location of the snapshots directory (absolute path or relative to application data dir) (eosio::producer_plugin)
# snapshots-dir = "snapshots"

# Time in seconds between two consecutive checks of resource usage. Should be between 1 and 300 (eosio::resource_monitor_plugin)
# resource-monitor-interval-seconds = 2

# Threshold in terms of percentage of used space vs total space. If used space is above (threshold - 5%), a warning is generated.  Unless resource-monitor-not-shutdown-on-threshold-exceeded is enabled, a graceful shutdown is initiated if used space is above the threshold. The value should be between 6 and 99 (eosio::resource_monitor_plugin)
# resource-monitor-space-threshold = 90

# Used to indicate nodeos will not shutdown when threshold is exceeded. (eosio::resource_monitor_plugin)
# resource-monitor-not-shutdown-on-threshold-exceeded =

# Number of resource monitor intervals between two consecutive warnings when the threshold is hit. Should be between 1 and 450 (eosio::resource_monitor_plugin)
# resource-monitor-warning-interval = 30

# the location of the state-history directory (absolute path or relative to application data dir) (eosio::state_history_plugin)
state-history-dir = "/mnt/eosmain/node/state-history"

# enable trace history (eosio::state_history_plugin)
trace-history = true

# enable chain state history (eosio::state_history_plugin)
chain-state-history = true

# the endpoint upon which to listen for incoming connections. Caution: only expose this port to your internal network. (eosio::state_history_plugin)
state-history-endpoint = 127.0.0.1:8080

# enable debug mode for trace history (eosio::state_history_plugin)
# trace-history-debug-mode = false

# if set, periodically prune the state history files to store only configured number of most recent blocks (eosio::state_history_plugin)
# state-history-log-retain-blocks =

# the location of the trace directory (absolute path or relative to application data dir) (eosio::trace_api_plugin)
# trace-dir = "traces"

# the number of blocks each "slice" of trace data will contain on the filesystem (eosio::trace_api_plugin)
# trace-slice-stride = 10000

# Number of blocks to ensure are kept past LIB for retrieval before "slice" files can be automatically removed.
# A value of -1 indicates that automatic removal of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-irreversible-history-blocks = -1

# Number of blocks to ensure are uncompressed past LIB. Compressed "slice" files are still accessible but may carry a performance loss on retrieval
# A value of -1 indicates that automatic compression of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-uncompressed-irreversible-history-blocks = -1

# ABIs used when decoding trace RPC responses.
# There must be at least one ABI specified OR the flag trace-no-abis must be used.
# ABIs are specified as "Key=Value" pairs in the form <account-name>=<abi-def>
# Where <abi-def> can be:
#    an absolute path to a file containing a valid JSON-encoded ABI
#    a relative path from `data-dir` to a file containing a valid JSON-encoded ABI
#  (eosio::trace_api_plugin)
# trace-rpc-abi =

# Use to indicate that the RPC responses will not use ABIs.
# Failure to specify this option when there are no trace-rpc-abi configurations will result in an Error.
# This option is mutually exclusive with trace-rpc-abi (eosio::trace_api_plugin)
# trace-no-abis =

# Lag in number of blocks from the head block when selecting the reference block for transactions (-1 means Last Irreversible Block) (eosio::txn_test_gen_plugin)
# txn-reference-block-lag = 0

# Number of worker threads in txn_test_gen thread pool (eosio::txn_test_gen_plugin)
# txn-test-gen-threads = 2

# Prefix to use for accounts generated and used by this plugin (eosio::txn_test_gen_plugin)
# txn-test-gen-account-prefix = txn.test.

# Plugin(s) to enable, may be specified multiple times
plugin = eosio::net_plugin
plugin = eosio::http_plugin
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
plugin = eosio::state_history_plugin

Configurable limit of unapplied_transactions per account/authorizer

Node operators on WAX recently saw block-production issues related to heavily filled unapplied_transactions queues.

While the exact cause is still unclear, it also looked as though large parts of the queue were filled with failing transactions, consuming a large amount of transaction-processing time without any transactions being applied to the produced block.

Regarding the unapplied_transactions queue, we asked whether a configurable per-account limit on entries in that queue would help in such scenarios, since the lack of a per-account limit effectively allows a single account to fill up the queue with as many transactions as it can push. Even with subjective billing enabled, a single account would be able to fill up the queue with failing transactions.

Probably "subjective-account-max-failures" ( added in https://github.com/eosnetworkfoundation/mandel/releases/tag/v3.1.0-rc1) kicks in at some point but it's not clear how exactly it works.

An important open question is what a sustainable and reliable limit would be, and how to distinguish between authorizers. A co-signing service could legitimately push hundreds of transactions per block; blocking such an account would run counter to the core design of the protocol and harm non-malicious users and services.
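To make the idea concrete, here is a minimal Python sketch of such a per-account cap. The class, the cap value, and the queue shape are hypothetical illustrations, not nodeos internals:

```python
from collections import defaultdict, deque

# Hypothetical sketch of a per-account cap on the unapplied-transactions queue.
# The class and the quota mechanism are illustrative assumptions, not actual
# nodeos data structures.
class UnappliedQueue:
    def __init__(self, max_per_account):
        self.max_per_account = max_per_account
        self.queue = deque()                 # (account, trx_id) in arrival order
        self.per_account = defaultdict(int)  # pending count per first authorizer

    def push(self, account, trx_id):
        # Subjectively drop when this authorizer already fills its quota.
        if self.per_account[account] >= self.max_per_account:
            return False
        self.queue.append((account, trx_id))
        self.per_account[account] += 1
        return True

    def pop(self):
        account, trx_id = self.queue.popleft()
        self.per_account[account] -= 1
        return account, trx_id

q = UnappliedQueue(max_per_account=2)
assert q.push("spammer", 1)
assert q.push("spammer", 2)
assert not q.push("spammer", 3)   # third pending trx from the same account is refused
assert q.push("cosigner", 4)      # other authorizers are unaffected
```

A real limit would also have to answer the co-signer question above, for example by exempting known high-volume accounts or scaling the cap with recent failure rates.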

Add more logging of subjective information on non-producing nodes

On a block producing node we have many details logged when producing a block (with debug logging enabled).

To aid in debugging network issues on non-producing nodes, it would be great if there were some sort of summary per speculative block of how many transactions failed, for what reasons, and for which accounts.

To ease log scraping, the log format between producing and non-producing nodes should ideally be the same.

Add basic documentation on logging.json

  • purpose
  • how to configure
  • how sending SIGHUP to nodeos reloads it
  • list of different config blocks and when you might want to use them
  • GELF logging

Recently, transaction_trace_failure: debug was used for debugging WAX issues.

Subjective CPU billing using uint32_t

Objective CPU billing uses an int64_t, while subjective CPU billing uses a uint32_t, which is limited to 4294967295. It would be beneficial for these types to be the same.
Switch subjective CPU billing to use int64_t.
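The practical risk is silent wraparound: if the accumulator counts microseconds, a uint32_t tops out at roughly 71.6 minutes of billed CPU and then wraps, while an int64_t effectively never does. A small Python illustration of the two behaviors (the helper functions are illustrative, not nodeos code):

```python
UINT32_MAX = 4294967295  # 2**32 - 1; ~71.6 minutes if counting microseconds

def bill_uint32(total, usage_us):
    """Accumulate as a C++ uint32_t would: modulo 2**32."""
    return (total + usage_us) % (UINT32_MAX + 1)

def bill_int64(total, usage_us):
    """int64_t has headroom for centuries of microseconds; no wrap in practice."""
    return total + usage_us

total32 = total64 = 0
for _ in range(5):
    total32 = bill_uint32(total32, 1_000_000_000)  # bill 1000 s of CPU, five times
    total64 = bill_int64(total64, 1_000_000_000)

assert total64 == 5_000_000_000
assert total32 == 5_000_000_000 % 2**32   # wrapped to a much smaller value
```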

FC::sha3.hpp: failed to compile on Ubuntu 18.04 with Boost 1.79 when included from benchmark/hash.cpp

In CICD for PR #17, compiling succeeded on Ubuntu 20.04 and 22.04, but failed on Ubuntu 18.04: https://github.com/AntelopeIO/leap/runs/7905995360?check_suite_focus=true#step:4:1206

/usr/bin/g++-8  -DBOOST_ATOMIC_NO_LIB -DBOOST_CHRONO_NO_LIB -DBOOST_DATE_TIME_NO_LIB -DBOOST_FILESYSTEM_NO_LIB -DBOOST_IOSTREAMS_NO_LIB -DBOOST_PROGRAM_OPTIONS_NO_LIB -DOPENSSL_API_COMPAT=0x10100000L -DOPENSSL_NO_DEPRECATED -I../benchmark -I../libraries/fc/include -I../libraries/fc/vendor/websocketpp -I../libraries/fc/libraries/ff/libff/.. -isystem /usr/include/x86_64-linux-gnu -isystem /boost/include -I../libraries/fc/secp256k1/secp256k1 -I../libraries/fc/secp256k1/secp256k1/include -Wall -O3 -DNDEBUG   -fdiagnostics-color=always -pthread -std=gnu++1z -MD -MT benchmark/CMakeFiles/benchmark.dir/hash.cpp.o -MF benchmark/CMakeFiles/benchmark.dir/hash.cpp.o.d -o benchmark/CMakeFiles/benchmark.dir/hash.cpp.o -c ../benchmark/hash.cpp
In file included from ../benchmark/hash.cpp:2:
../libraries/fc/include/fc/crypto/sha3.hpp:105:8: error: 'hash' is not a class template
 struct hash<fc::sha3>
        ^~~~
../libraries/fc/include/fc/crypto/sha3.hpp:106:1: error: explicit specialization of non-template 'boost::hash'

The problem was due to the Boost version. Matt suggests:

Matt Witherspoon, [Aug 19, 2022 at 5:58:58 PM]:
...sha3.hpp might just need
#include <boost/functional/hash.hpp>
like sha256.hpp has

actually, it seems to compile fine just removing the boost::hash impl in sha3.hpp, so that code may not even be needed at the moment

Revisit names of eos.doxygen.in and eosio.version.in after 3.1.0 stable

eos.doxygen.in and eosio.version.in are left unchanged during the transition to Leap to minimize potential breaking changes. After 3.1.0 stable, we need to investigate any impacts of removing eos and eosio from eos.doxygen and eosio.version.hpp. Nathan's team should be consulted about eos.doxygen.

nodeos_startup_catchup_lr_test failure on Ubuntu 20

The PR merge failed with the following test failure. It looks like a race condition.

https://github.com/AntelopeIO/leap/runs/8045777819?check_suite_focus=true#step:5:192

2022-08-26T22:59:16.436040      Checking if port 9899 is available.
2022-08-26T22:59:16.439659     Checking if port 9899 is available.
2022-08-26T22:59:16.443260    cmd: programs/keosd/keosd --data-dir test_wallet_0 --config-dir test_wallet_0 --http-server-address=localhost:9899 --verbose-http-errors
2022-08-26T22:59:18.447069    Checking if keosd launched. pgrep -a keosd
2022-08-26T22:59:18.451425    Launched keosd. {240 programs/keosd/keosd --data-dir test_wallet_0 --config-dir test_wallet_0 --http-server-address=localhost:9899 --verbose-http-errors
}
2022-08-26T22:59:18.451907    cmd: programs/cleos/cleos  --url http://localhost:8888 --wallet-url http://localhost:9899 --no-auto-keosd wallet create --name ignition --to-console
2022-08-26T22:59:18.463844      ERROR: b'Failed http request to keosd at http://localhost:9899; is keosd running?\n  Error: connect: Connection refused\n'
2022-08-26T22:59:18.472022     Checking if port 9899 is available.
2022-08-26T22:59:18.472274     ERROR: Port 9899 is already in use
2022-08-26T22:59:18.472464  Test failed.
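The failure mode in the log (probe that a port is free, then launch a process that binds it) is a classic time-of-check/time-of-use race. A minimal Python sketch contrasting the racy probe with an atomic bind-and-hold alternative; the function names are illustrative, not the test harness's API:

```python
import socket

def port_is_free(port):
    """Racy: the port can be taken between this check and the real bind."""
    with socket.socket() as s:
        return s.connect_ex(("127.0.0.1", port)) != 0

def claim_port(port):
    """Atomic alternative: bind and keep the socket; no window for a race."""
    s = socket.socket()
    try:
        s.bind(("127.0.0.1", port))
    except OSError:
        s.close()
        return None
    return s

holder = claim_port(0)            # port 0: let the OS pick a free port
assert holder is not None
port = holder.getsockname()[1]
assert claim_port(port) is None   # a second claim fails deterministically
holder.close()
```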

main branch ("3.2") is failing pinned build

On Ubuntu 22

1 warning generated.
In file included from /Workspaces/leap/unittests/state_history_tests.cpp:5:
In file included from /Workspaces/leap/libraries/state_history/include/eosio/state_history/log.hpp:8:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread.hpp:13:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/thread.hpp:12:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/thread_only.hpp:17:
/Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/pthread/thread_data.hpp:60:5: error: function-like macro '__sysconf' is not defined
#if PTHREAD_STACK_MIN > 0
    ^
/usr/include/x86_64-linux-gnu/bits/pthread_stack_min-dynamic.h:26:30: note: expanded from macro 'PTHREAD_STACK_MIN'
#   define PTHREAD_STACK_MIN __sysconf (__SC_THREAD_STACK_MIN_VALUE)
                             ^
1 error generated.
make[2]: *** [unittests/CMakeFiles/unit_test.dir/build.make:487: unittests/CMakeFiles/unit_test.dir/state_history_tests.cpp.o] Error 1

add PKCS#11 keosd wallet provider

A PKCS#11 wallet provider would allow keosd to operate with many hardware HSMs: not just the YubiHSM that was removed via #66, but also the YubiKey 5, TPMs, Amazon CloudHSM, Nitrokey, ultra-low-cost generic JavaCards, and many more (it is a widely adopted industry standard). eosnetworkfoundation/mandel#110 goes into some details, as this work was originally planned to occur in tandem with #66 but is being pushed out until later.

Remove all references to Mandel

Remove all references to Mandel or mandel from within the antelope-3.1 branch of leap.

Also, update the URLs referencing eosnetworkfoundation org repos to use the corresponding ones in the AntelopeIO organization. This applies also to the LICENSE files of the submodule repos referenced from the antelope-3.1 branch of leap. Note changes to the LICENSE files in the submodules should not be done directly on the main branch. It should be done on either the antelope-main or antelope-3.1 branch (whichever one exists). If neither exists, then create an antelope-main branch off of main and make those changes in the antelope-main branch.

Backport enhanced debug logging from main to 3.1.1

Recently on the WAX blockchain, nodes were running with very high CPU utilization and generating blocks with low numbers of transactions.

Additional logging has already been added to the main branch that would have been helpful in debugging the situation. Rather than waiting for the next release, this request is to backport that logging into the next patch release of 3.1.x.

It is a new feature, but the logging is off by default. The benefits of this larger change might outweigh the risks.

The Nation team is starting to test the new logging features by running the main branch on non-producing, non-critical nodes, but we have not fully tested the changes yet.

Granular control of cpu-effort by block

We currently have control over the cpu-effort that limits either the first 11 blocks of a round, or only the last block (the 12th). We have found scenarios where even the 10th and 11th blocks were produced so large that it caused issues getting them to the next BP in the schedule on time, resulting in a fork of 2-3+ blocks each round.

If we had the ability to control each of the blocks in the round, or at least more granular control over the last couple, it would help BPs fine-tune their producer/peering settings more precisely to minimize network disruptions from larger blocks.
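For context, the existing knobs map an effort percentage onto the fixed 500 ms Antelope block interval; per-block control would generalize that mapping to one value per slot in a 12-block round. A Python sketch (the helper and the taper values are hypothetical, not an actual nodeos option):

```python
BLOCK_INTERVAL_US = 500_000  # Antelope block interval: 500 ms

def production_window_us(cpu_effort_percent):
    """Portion of the block interval spent producing, per cpu-effort-percent."""
    return BLOCK_INTERVAL_US * cpu_effort_percent // 100

# Today: one percentage for blocks 1-11, another for the last block of a round.
assert production_window_us(80) == 400_000   # cpu-effort-percent = 80

# A granular scheme could instead supply one value per block slot in the round,
# e.g. a hypothetical taper over the last three of twelve blocks:
per_block_effort = [80] * 9 + [70, 60, 50]
windows = [production_window_us(p) for p in per_block_effort]
assert windows[-1] == 250_000   # smallest window for the round's final block
```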

CICD: race condition between build & test jobs view of the source directory

The CICD system does an actions/checkout to get the code to build. Later, when running the tests, it does actions/checkout again in combination with restoring the builddir; some tests peek outside the builddir at the source directory, so we need both.

The catch is that actions/checkout operates on the branch name, e.g. it will do something like

/usr/bin/git checkout --progress --force -B minor_build_speed_improvement refs/remotes/origin/minor_build_speed_improvement

This means there is a race condition: if someone pushes to the branch between the build and test steps, the source directory won't match what the builddir represents during the test step.

Maybe, instead of tarballing up only the builddir, both the sourcedir and the builddir should be tarballed up and sent downstream to the test jobs.

3.1 not syncing on startup

A failure of nodeos to sync on startup was reported. Logs were provided and investigated.

Situation: on startup, nodeos sends handshakes to peer nodes and attempts to start syncing from the node's LIB up to the peers' LIB. This process starts a storm of start-sync calls, which resets the next expected block. This reset during sync causes the node to get stuck waiting for the wrong block. Need to revert eosnetworkfoundation/mandel#627 and determine a better way to unstick a node.

Enhancement: Allow inspection of various queues within nodeos

On a block-producing node (and other nodes), various transactions are queued up for various purposes, for example the pending queue on a BP node.

Add producer APIs to allow the inspection of these kind of queues.

Show the transaction id, authorizing account, cpu usage and other interesting things.

This request is vague, as I am not sure what information is easy to obtain. The goal is to give node operators the details to manage situations where their nodes are under heavy CPU load, so they can easily find which account/contract is a heavy user and possibly take action to mitigate the impact of high levels of "abuse".

Compile warnings in trace_api_plugin

Minor warnings:

plugins/trace_api_plugin/compressed_file.cpp:138:66: warning: comparison of integer expressions of different signedness: ‘std::__tuple_element_t<0, std::tuple<long unsigned int, long unsigned int> >’ {aka ‘long unsigned int’} and ‘long int’ [-Wsign-compare]

plugins/trace_api_plugin/compressed_file.cpp:289:30: warning: comparison of integer expressions of different signedness: ‘int’ and ‘const long unsigned int’ [-Wsign-compare]

trace_api_plugin/trace_api_plugin.cpp:363:12: warning: variable ‘cfg_options’ set but not used [-Wunused-but-set-variable]

192 compile warnings in fc/secp256k1

fc/secp256k1 has 192 compile warnings. They are in the form of

libraries/fc/secp256k1/secp256k1/src/field.h:44:13: warning: ‘secp256k1_fe_normalize’ declared ‘static’ but never defined [-Wunused-function]
libraries/fc/secp256k1/secp256k1/src/field.h:47:13: warning: ‘secp256k1_fe_normalize_weak’ declared ‘static’ but never defined [-Wunused-function]
...
libraries/fc/secp256k1/secp256k1/src/ecmult_impl.h:395:12: warning: ‘secp256k1_ecmult_strauss_batch_single’ defined but not used [-Wunused-function]

They should be cleaned up once the decision on de-submoduling is made.

Honor max-scheduled-transaction-time-per-block-ms

max-scheduled-transaction-time-per-block-ms - "Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing."

Currently, if incoming-defer-ratio ("ratio between incoming transactions and deferred transactions when both are queued for execution", default 1) is set to >= 1, then incoming transactions are processed during the scheduled-transaction window, reducing the number of scheduled transactions that can be processed within max-scheduled-transaction-time-per-block-ms.

Options:

  1. Deprecate incoming-defer-ratio and use full max-scheduled-transaction-time-per-block-ms exclusively for scheduled transaction processing.
  2. Modify process_scheduled_and_incoming_trxs to only consider scheduled transaction processing against the max-scheduled-transaction-time-per-block-ms limit.

Option 1 would simplify eosnetworkfoundation/mandel#297.
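A rough Python model of the two accounting schemes; the loop is an illustrative simplification under the stated assumptions (uniform per-transaction cost, strict interleaving by the ratio), not the actual process_scheduled_and_incoming_trxs logic:

```python
def retire_within_budget(budget_us, per_trx_us, defer_ratio,
                         count_incoming_against_budget=True):
    """Model: interleave `defer_ratio` incoming trxs per scheduled trx.
    Returns how many scheduled trxs retire before the budget is exhausted."""
    spent = 0
    scheduled_done = 0
    while True:
        # Incoming transactions interleaved per the ratio.
        if count_incoming_against_budget:
            spent += defer_ratio * per_trx_us   # today's behavior
        if spent + per_trx_us > budget_us:
            return scheduled_done
        spent += per_trx_us                     # one scheduled trx retired
        scheduled_done += 1

# With a 100 ms budget, 1 ms per trx, ratio 1: incoming trxs eat half the window.
assert retire_within_budget(100_000, 1_000, 1) == 50
# Option 2 (count only scheduled work against the limit) retires twice as many.
assert retire_within_budget(100_000, 1_000, 1,
                            count_incoming_against_budget=False) == 100
```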

Log signed_transactions instead of transactions

Currently the transaction logger in producer_plugin logs the transaction type, not the signed_transaction type.
Consider logging signed_transaction instead, which would include the signatures and context_free_data that are currently not logged.

deep mind print inconsistency

I'm seeing a very weird bug: I run nodeos with two different configs, both completely commented out, and with one I get deep mind prints while with the other I don't.

nodeos -e -p eosio --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin --plugin eosio::chain_api_plugin --plugin eosio::chain_plugin --plugin eosio::http_plugin --delete-all-blocks -d /home/ubuntu/.zeus/nodeos/data --http-server-address=0.0.0.0:8888 --access-control-allow-origin=* --contracts-console --max-transaction-time=150000 --http-validate-host=false --http-max-response-time-ms=9999999 --verbose-http-errors --trace-history-debug-mode --wasm-runtime=eos-vm-jit --genesis-json=/home/ubuntu/.zeus/nodeos/config/genesis.json --chain-threads=2 --abi-serializer-max-time-ms=100 --max-block-cpu-usage-threshold-us=50000 --eos-vm-oc-enable --eos-vm-oc-compile-threads=2 --deep-mind --block-log-retain-blocks=1000 --disable-subjective-billing=true

Gets me deep mind prints

nodeos -e -p eosio --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin --plugin eosio::chain_api_plugin --plugin eosio::chain_plugin --plugin eosio::http_plugin --delete-all-blocks -d /home/ubuntu/.zeus/nodeos/data --http-server-address=0.0.0.0:8888 --access-control-allow-origin=* --contracts-console --max-transaction-time=150000 --http-validate-host=false --http-max-response-time-ms=9999999 --verbose-http-errors --trace-history-debug-mode --wasm-runtime=eos-vm-jit --genesis-json=/home/ubuntu/.zeus/nodeos/config/genesis.json --chain-threads=2 --abi-serializer-max-time-ms=100 --max-block-cpu-usage-threshold-us=50000 --eos-vm-oc-enable --eos-vm-oc-compile-threads=2 --deep-mind --block-log-retain-blocks=1000 --disable-subjective-billing=true --config-dir /home/ubuntu/.zeus/nodeos/config

Does not.

Diff of config.tomls both 100% commented out https://www.diffchecker.com/d9YYPz5L
Diff of commands and outputs (same order left/right as tomls): https://www.diffchecker.com/gB3nbRRt

nodeos -v
v3.1.0

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.6 LTS
Release:        18.04
Codename:       bionic

Test Contracts Source Files

There are a handful of test contracts in unittests/contracts that are not accompanied by source files for easy indication of versioning or reproduction. Especially with eos-system-contracts living outside the AntelopeIO organization, would it be helpful to bring the sources across for reference?

They should be clearly marked as example contracts for testing and not for production use and should live in the unittests/* directory.

They can be pulled from: https://github.com/EOSIO/eos/tree/develop/contracts/contracts

Once example contracts' source files are included, they should be compiled when EOSIO_COMPILE_TEST_CONTRACTS is indicated.
