antelopeio / leap
C++ implementation of the Antelope protocol
License: Other
Update the *.swagger.yaml files under the plugins directory to reflect API changes. This is a documentation change, not a functional change.
get_transaction_status
compute_transaction
send_transaction2
get_info
Example
https://github.com/AntelopeIO/leap/blob/main/plugins/chain_api_plugin/chain.swagger.yaml
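As a quick sanity check while editing the swagger files, the four endpoints above can be exercised directly. A minimal Python sketch; only the endpoint names come from this issue, while the payload shapes and helper are illustrative assumptions:

```python
import json

CHAIN_API = "/v1/chain"
ENDPOINTS = ["get_transaction_status", "compute_transaction", "send_transaction2", "get_info"]

def build_post(endpoint, body=None):
    """Return (path, JSON body) for a chain API POST; rejects undocumented endpoints."""
    if endpoint not in ENDPOINTS:
        raise ValueError("not one of the documented endpoints: %s" % endpoint)
    return "%s/%s" % (CHAIN_API, endpoint), json.dumps(body or {}).encode()

path, body = build_post("get_info")
print(path)  # /v1/chain/get_info
```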
From Anders:
I've been breaking down how our peering setup is done. I tried to make it as simple and easy to understand as I could, but it was not a super easy task.
It's also added to this article that I wrote to try to help folks grasp a bit of what is happening.
https://waxsweden.org/why-did-some-of-your-transactions-vanish/
Remove all references to Mandel or mandel from within the antelope-3.1 branch of leap.
Also, update the URLs referencing eosnetworkfoundation org repos to use the corresponding ones in the AntelopeIO organization. This also applies to the LICENSE files of the submodule repos referenced from the antelope-3.1 branch of leap. Note that changes to the LICENSE files in the submodules should not be made directly on the main branch. They should be made on either the antelope-main or antelope-3.1 branch (whichever one exists). If neither exists, then create an antelope-main branch off of main and make those changes in the antelope-main branch.
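A throwaway scan like the following can verify a branch is clean after the rename. This is an illustrative sketch, not tooling from the repo; the two patterns are the renames described above:

```python
import re
from pathlib import Path

PATTERNS = [
    re.compile(r"[Mm]andel"),                     # Mandel / mandel references
    re.compile(r"github\.com/eosnetworkfoundation"),  # org URLs to migrate to AntelopeIO
]

def find_stale_references(root):
    """Return (path, line number, line) for every line matching a stale pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```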
On Ubuntu 22:
1 warning generated.
In file included from /Workspaces/leap/unittests/state_history_tests.cpp:5:
In file included from /Workspaces/leap/libraries/state_history/include/eosio/state_history/log.hpp:8:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread.hpp:13:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/thread.hpp:12:
In file included from /Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/thread_only.hpp:17:
/Workspaces/leap/dependencies/boost_1_70_0/bin/include/boost/thread/pthread/thread_data.hpp:60:5: error: function-like macro '__sysconf' is not defined
#if PTHREAD_STACK_MIN > 0
^
/usr/include/x86_64-linux-gnu/bits/pthread_stack_min-dynamic.h:26:30: note: expanded from macro 'PTHREAD_STACK_MIN'
# define PTHREAD_STACK_MIN __sysconf (__SC_THREAD_STACK_MIN_VALUE)
^
1 error generated.
make[2]: *** [unittests/CMakeFiles/unit_test.dir/build.make:487: unittests/CMakeFiles/unit_test.dir/state_history_tests.cpp.o] Error 1
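Some context, as an assumption inferred from the error text rather than stated in the report: on glibc >= 2.34 (as shipped with Ubuntu 22.04), PTHREAD_STACK_MIN is no longer a compile-time constant; it expands to __sysconf (__SC_THREAD_STACK_MIN_VALUE), so a preprocessor test like `#if PTHREAD_STACK_MIN > 0` in Boost 1.70's thread_data.hpp can no longer be evaluated. The value is only obtainable at runtime; the Python equivalent of that runtime query:

```python
import os

def min_thread_stack():
    """Return PTHREAD_STACK_MIN via sysconf, or None where unsupported."""
    name = "SC_THREAD_STACK_MIN"
    if name in os.sysconf_names:
        return os.sysconf(name)
    return None
```

The usual remedy is upgrading Boost (later releases account for the non-constant PTHREAD_STACK_MIN) or patching the offending header, rather than changing glibc.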
A failure of nodeos to sync on startup was reported. Logs were provided and investigated.
Situation: On startup, nodeos sends handshakes to peer nodes and attempts to start syncing from its own LIB up to its peers' LIB. This process starts a storm of start-sync calls, which resets the next expected block. This reset during sync causes the node to get stuck waiting for the wrong block. Need to revert eosnetworkfoundation/mandel#627 and determine a better way to unstick a node.
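A toy model of the failure mode described above (illustrative only; these names are not net_plugin internals):

```python
class SyncState:
    """Toy model: a syncing node only accepts the next expected block number."""
    def __init__(self, lib):
        self.lib = lib
        self.next_expected = lib + 1

    def start_sync(self):
        # The problematic reset: every handshake-triggered start-sync call
        # rewinds next_expected, discarding in-flight progress.
        self.next_expected = self.lib + 1

    def on_block(self, num):
        if num != self.next_expected:
            return False  # ignored; the node keeps waiting
        self.next_expected = num + 1
        return True

node = SyncState(lib=100)
for n in range(101, 106):
    node.on_block(n)          # applies blocks 101..105
node.start_sync()             # storm of start-sync calls rewinds to 101
print(node.on_block(106))     # False: stuck waiting for 101, which won't re-arrive
```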
When proposing an MSIG using cleos or eosc, the proposal fails if the transaction includes "data": {} objects.
v3.1.0
v2.1.0
v1.4.0
$ cleos -u https://kylin.api.eosnation.io transfer eosio eosio.null "1.0000 EOS" "propose as MSIG" -s -d --json-file transfer.json --expiration 5000000
transfer.json transaction:
{
"expiration": "2022-11-05T08:26:34",
"ref_block_num": 61319,
"ref_block_prefix": 2545640930,
"max_net_usage_words": 0,
"max_cpu_usage_ms": 0,
"delay_sec": 0,
"context_free_actions": [],
"actions": [{
"account": "eosio.token",
"name": "transfer",
"authorization": [{
"actor": "eosio",
"permission": "active"
}
],
"data": {
"from": "eosio",
"to": "eosio.null",
"quantity": "1.0000 EOS",
"memo": "propose as MSIG"
},
"hex_data": "0000000000ea305500408c7a02ea3055102700000000000004454f53000000000f70726f706f7365206173204d534947"
}
],
"signatures": [],
"context_free_data": []
}
With cleos:
$ cleos -u https://kylin.api.eosnation.io multisig propose_trx transfer '[{"actor":"eosio","permission":"active"}]' transfer.json <account>
error 2022-09-08T11:34:45.230 thread-0 main.cpp:4371 operator() ] Failed with error: Bad Cast (7)
Invalid cast from type 'object_type' to string
With eosc:
$ eosc -u https://kylin.api.eosnation.io multisig propose <account> transfer transfer.json --request eosio
Enter passphrase to decrypt your vault:
ERROR: signing transaction: get_required_keys: json: error calling MarshalJSON for type *eos.Action: Encode: unsupported type map[string]interface {}
With the data field removed:
{
"expiration": "2022-11-05T08:26:34",
"ref_block_num": 61319,
"ref_block_prefix": 2545640930,
"max_net_usage_words": 0,
"max_cpu_usage_ms": 0,
"delay_sec": 0,
"context_free_actions": [],
"actions": [{
"account": "eosio.token",
"name": "transfer",
"authorization": [{
"actor": "eosio",
"permission": "active"
}
],
"hex_data": "0000000000ea305500408c7a02ea3055102700000000000004454f53000000000f70726f706f7365206173204d534947"
}
],
"signatures": [],
"context_free_data": []
}
with eosc and cleos:
Error 3015014: Pack data exception
Error Details:
Missing field 'data' in input object while processing struct 'propose.trx.actions[0]'
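One possible workaround sketch, offered as an assumption rather than a documented fix: both failures stem from the object-form "data", so replacing it with the already-packed "hex_data" string before proposing gives each tool a string where it expects one. Illustrative Python, not part of either tool; the hex value here is a short placeholder:

```python
def pack_action_data(trx):
    """Replace each action's object-form "data" with its packed "hex_data" string."""
    for action in trx.get("actions", []):
        if isinstance(action.get("data"), dict) and "hex_data" in action:
            action["data"] = action.pop("hex_data")
    return trx

trx = {"actions": [{"name": "transfer",
                    "data": {"from": "eosio", "to": "eosio.null"},
                    "hex_data": "00ab"}]}
print(pack_action_data(trx)["actions"][0]["data"])  # 00ab
```

Rewriting transfer.json this way before `cleos multisig propose_trx` leaves only string-typed action data in the input.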
Lines 158 to 159 in 85a2ecc
This assumption breaks running the test in any other configuration.
A PKCS#11 wallet provider would allow keosd to operate with many hardware HSMs: not just the YubiHSM that was removed via #66, but also YubiKey 5, TPM, Amazon CloudHSM, Nitrokey, ultra-low-cost generic JavaCards, and many more (it's a wide industry standard). eosnetworkfoundation/mandel#110 goes into some detail, as originally this work was planned to occur in tandem with #66, but it is being pushed out until later.
Objective CPU billing uses an int64_t, while subjective billing uses a uint32_t, which is limited to 4294967295. It would be beneficial for these types to be the same.
Switch subjective CPU billing to use int64_t.
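Illustrative arithmetic (not leap code) for why the widths matter: microsecond amounts accumulated in a uint32_t wrap after 2^32 - 1 us, roughly 71.6 minutes of billed CPU, while an int64_t effectively never wraps:

```python
UINT32_MAX = 2**32 - 1  # 4294967295

def add_u32(acc, us):
    """Simulate uint32_t accumulation: wraps modulo 2**32."""
    return (acc + us) & 0xFFFFFFFF

print(UINT32_MAX)               # 4294967295
print(UINT32_MAX / 60_000_000)  # ~71.6 minutes of microseconds before wrapping
print(add_u32(UINT32_MAX, 1))   # 0: a wrapped bill
```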
We currently have control over the cpu-effort that limits either the first 11 blocks of a round or only the last (12th) block. We have found scenarios where even the 10th and 11th blocks were created so large that it caused issues getting them to the next BP in the schedule on time, resulting in a 2-3+ block fork each round.
If we had the ability to control each of the blocks in the round, or at least the last couple with more granularity, it would help BPs fine-tune their producer/peering settings to minimize network disruptions from larger blocks.
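For reference, the arithmetic behind the request (a 500 ms block interval and 12-block producer rounds are protocol constants; the option names are the existing cpu-effort-percent and last-block-cpu-effort-percent producer_plugin settings, and the per-block list is hypothetical):

```python
BLOCK_INTERVAL_US = 500_000   # one block every 500 ms
BLOCKS_PER_ROUND = 12         # each producer signs 12 consecutive blocks

def production_window_us(cpu_effort_percent):
    """Portion of a block slot spent producing, per the effort percentage."""
    return BLOCK_INTERVAL_US * cpu_effort_percent // 100

# Today: one knob for blocks 1-11 (cpu-effort-percent) and one for block 12
# (last-block-cpu-effort-percent). The request is per-block granularity, e.g.
# tapering effort across the final blocks of the round:
per_block_effort = [80] * 9 + [70, 60, 50]   # hypothetical per-block percentages
windows = [production_window_us(p) for p in per_block_effort]
print(windows[-3:])  # [350000, 300000, 250000]
```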
Today we believe that plugins should follow these phases.
We'd like to evaluate existing plugins to identify edge cases and confirm that we are or aren't broadly adopting this pattern. From there, we can submit an official proposal to adhere to for new plugins.
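As a starting point for that evaluation, a minimal mirror of the lifecycle phases in question. The method names follow appbase's plugin_initialize/plugin_startup/plugin_shutdown convention; the state machine itself is an illustrative sketch, not leap code:

```python
from enum import Enum, auto

class State(Enum):
    REGISTERED = auto()
    INITIALIZED = auto()
    STARTED = auto()
    STOPPED = auto()

class Plugin:
    """Illustrative lifecycle of an appbase-style plugin."""
    def __init__(self):
        self.state = State.REGISTERED

    def initialize(self, options):
        # parse and validate configuration; no I/O or threads yet
        assert self.state is State.REGISTERED
        self.state = State.INITIALIZED

    def startup(self):
        # open sockets, spawn threads; only after every plugin is initialized
        assert self.state is State.INITIALIZED
        self.state = State.STARTED

    def shutdown(self):
        # must be safe to call from any state
        self.state = State.STOPPED

p = Plugin()
p.initialize({})
p.startup()
p.shutdown()
print(p.state)  # State.STOPPED
```

Edge cases to look for would then be plugins doing startup-phase work (threads, sockets) inside initialize, or shutdown paths that assume a particular prior state.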
Node operators need to run the producer API to make runtime changes to nodeos when dealing with networks under stress. However, the producer API does not have a dedicated port to listen on.
API node operators are required to set up proxy configuration to separate the publicly available APIs from the private ones. A dedicated port would be much less error prone.
root@iZt4n2zp3wr6kkpibstlf4Z:/coins/eos/main# cleos -u http://127.0.0.1:6666 get transaction-status 2FA3EB63C1E2626F0A5B3238E137830EC088A2F19D14AE0B12BB5F3ED3011493
This is my configuration:
# producer-name = !!_YOUR_PRODUCER_NAME_!!!
# signature-provider = YOUR_PUB_KEY_HERE=KEY:YOUR_PRIV_KEY_HERE
http-server-address = 0.0.0.0:6666
p2p-listen-endpoint = 0.0.0.0:9876
p2p-server-address = 172.31.88.66:9876
chain-state-db-size-mb = 26384
#reversible-blocks-db-size-mb = 1024
contracts-console = true
p2p-max-nodes-per-host = 100
chain-threads = 8
http-threads = 6
# eosio2.0
http-max-response-time-ms = 100
#wasm-runtime = wabt
#Only!! for performance eosio 2.0+
eos-vm-oc-compile-threads = 4
eos-vm-oc-enable = 0
wasm-runtime = eos-vm-jit
#END
http-validate-host = false
verbose-http-errors = true
abi-serializer-max-time-ms = 2000
enable-account-queries = true #only for API node
#option from eosio 2.0.7
max-nonprivileged-inline-action-size = 4096
#produce-time-offset-us = 250000
last-block-time-offset-us = -300000
# Safely shut down node when free space
chain-state-db-guard-size-mb = 128
#reversible-blocks-db-guard-size-mb = 2
access-control-allow-origin = *
access-control-allow-headers = Origin, X-Requested-With, Content-Type, Accept
allowed-connection = any
max-clients = 150
connection-cleanup-period = 30
#network-version-match = 0
sync-fetch-span = 2000
enable-stale-production = false
pause-on-startup = false
max-irreversible-block-age = -1
txn-reference-block-lag = 0
plugin = eosio::http_plugin
plugin = eosio::producer_plugin
#plugin = eosio::producer_api_plugin
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::trace_api_plugin
transaction-finality-status-max-storage-size-gb = 5
trace-no-abis = true
p2p-peer-address = eos.seed.eosnation.io:9876
p2p-peer-address = peer1.eosphere.io:9876
p2p-peer-address = peer2.eosphere.io:9876
p2p-peer-address = p2p.genereos.io:9876
eos.doxygen.in and eosio.version.in are left unchanged during the transition to Leap to minimize potential breaking changes. After 3.1.0 stable, we need to investigate any impacts of removing eos and eosio from eos.doxygen and eosio.version.hpp. Nathan's team should be consulted about eos.doxygen.
When I try to get the node height, it returns a response like this:
curl -s -X POST http://127.0.0.1:8888/v1/chain/get_info
{
"code": 429,
"message": "Busy",
"error": {
"code": 429,
"name": "Busy",
"what": "Too many bytes in flight: 897",
"details": []
}
}
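The 429 corresponds to the http_plugin in-flight limits (see http-max-bytes-in-flight-mb in the config below), so one client-side mitigation is to retry with backoff rather than treat "Busy" as fatal. An illustrative stdlib-only sketch; the URL is the one from the report:

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delay(attempt, base=0.5):
    """Exponential backoff: 0.5 s, 1 s, 2 s, ..."""
    return base * (2 ** attempt)

def get_info(url="http://127.0.0.1:8888", retries=5):
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url + "/v1/chain/get_info") as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429:          # only retry the "Busy" responses
                raise
            time.sleep(backoff_delay(attempt))
    raise RuntimeError("node stayed busy after %d attempts" % retries)
```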
log output:
info 2022-08-25T00:59:03.818 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block a715bdc2807485f5... #264435005 @ 2022-08-25T00:59:04.000 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -181 ms]
info 2022-08-25T00:59:04.316 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block 67fdc4682faad5f6... #264435006 @ 2022-08-25T00:59:04.500 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -183 ms]
info 2022-08-25T00:59:04.819 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block d562a7213f699348... #264435007 @ 2022-08-25T00:59:05.000 signed by eosrapidprod [trxs: 5, lib: 264434672, conf: 0, latency: -180 ms]
info 2022-08-25T00:59:05.216 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block 56313fe9ce7a0868... #264435008 @ 2022-08-25T00:59:05.500 signed by eosrapidprod [trxs: 3, lib: 264434672, conf: 0, latency: -283 ms]
info 2022-08-25T00:59:05.852 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block 8b219bf51b26089c... #264435009 @ 2022-08-25T00:59:06.000 signed by hashfineosio [trxs: 4, lib: 264434684, conf: 240, latency: -147 ms]
info 2022-08-25T00:59:05.937 net-1 net_plugin.cpp:2943 handle_message ] ["eosn-eos-seed22a:9876 - 402b976" - 2 216.66.68.24:9876] received time_message
info 2022-08-25T00:59:06.634 net-1 net_plugin.cpp:2943 handle_message ] ["p2p.eosflare.io:9876 - b158156" - 5 108.171.210.180:9876] received time_message
info 2022-08-25T00:59:06.778 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block e4c5e56354d3eead... #264435010 @ 2022-08-25T00:59:06.500 signed by hashfineosio [trxs: 2, lib: 264434684, conf: 0, latency: 278 ms]
info 2022-08-25T00:59:06.973 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block ac31b20e4b81560a... #264435011 @ 2022-08-25T00:59:07.000 signed by hashfineosio [trxs: 5, lib: 264434684, conf: 0, latency: -26 ms]
info 2022-08-25T00:59:07.357 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block 64f1bb87009fffee... #264435012 @ 2022-08-25T00:59:07.500 signed by hashfineosio [trxs: 3, lib: 264434684, conf: 0, latency: -142 ms]
info 2022-08-25T00:59:07.843 nodeos producer_plugin.cpp:504 on_incoming_block ] Received block e1f1cbaecb938fbc... #264435013 @ 2022-08-25T00:59:08.000 signed by hashfineosio [trxs: 3, lib: 264434684, conf: 0, latency: -156 ms]
OS Version: Docker Ubuntu 20.04
Node Version: v3.1.0
I started from a snapshot load:
/opt/eosmain/core/nodeos --data-dir=/mnt/eosmain/node --config-dir=/mnt/eosmain/conf --snapshot=/mnt/eosmain/snapshot/snapshot-v6-latest.bin --disable-replay-opts
My config file:
# the location of the blocks directory (absolute path or relative to application data dir) (eosio::chain_plugin)
blocks-dir = "/mnt/eosmain/node"
# the location of the protocol_features directory (absolute path or relative to application config dir) (eosio::chain_plugin)
# protocol-features-dir = "protocol_features"
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints. (eosio::chain_plugin)
# checkpoint =
# Override default WASM runtime ( "eos-vm-jit", "eos-vm")
# "eos-vm-jit" : A WebAssembly runtime that compiles WebAssembly code to native x86 code prior to execution.
# "eos-vm" : A WebAssembly interpreter.
# (eosio::chain_plugin)
wasm-runtime = eos-vm-jit
# The name of an account whose code will be profiled (eosio::chain_plugin)
# profile-account =
# Override default maximum ABI serialization time allowed in ms (eosio::chain_plugin)
abi-serializer-max-time-ms = 60000
# Maximum size (in MiB) of the chain state database (eosio::chain_plugin)
chain-state-db-size-mb = 40960
# Safely shut down node when free space remaining in the chain state database drops below this size (in MiB). (eosio::chain_plugin)
chain-state-db-guard-size-mb = 128
# Percentage of actual signature recovery cpu to bill. Whole number percentages, e.g. 50 for 50% (eosio::chain_plugin)
# signature-cpu-billable-pct = 50
# Number of worker threads in controller thread pool (eosio::chain_plugin)
# chain-threads = 2
# print contract's output to console (eosio::chain_plugin)
contracts-console = true
# print deeper information about chain operations (eosio::chain_plugin)
# deep-mind = false
# Account added to actor whitelist (may specify multiple times) (eosio::chain_plugin)
# actor-whitelist =
# Account added to actor blacklist (may specify multiple times) (eosio::chain_plugin)
# actor-blacklist =
# Contract account added to contract whitelist (may specify multiple times) (eosio::chain_plugin)
# contract-whitelist =
# Contract account added to contract blacklist (may specify multiple times) (eosio::chain_plugin)
# contract-blacklist =
# Action (in the form code::action) added to action blacklist (may specify multiple times) (eosio::chain_plugin)
# action-blacklist =
# Public key added to blacklist of keys that should not be included in authorities (may specify multiple times) (eosio::chain_plugin)
# key-blacklist =
# Deferred transactions sent by accounts in this list do not have any of the subjective whitelist/blacklist checks applied to them (may specify multiple times) (eosio::chain_plugin)
# sender-bypass-whiteblacklist =
# Database read mode ("speculative", "head", "read-only", "irreversible").
# In "speculative" mode: database contains state changes by transactions in the blockchain up to the head block as well as some transactions not yet included in the blockchain.
# In "head" mode: database contains state changes by only transactions in the blockchain up to the head block; transactions received by the node are relayed if valid.
# In "read-only" mode: (DEPRECATED: see p2p-accept-transactions & api-accept-transactions) database contains state changes by only transactions in the blockchain up to the head block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
# In "irreversible" mode: database contains state changes by only transactions in the blockchain up to the last irreversible block; transactions received via the P2P network are not relayed and transactions cannot be pushed via the chain API.
# (eosio::chain_plugin)
# read-mode = speculative
# Allow API transactions to be evaluated and relayed if valid. (eosio::chain_plugin)
# api-accept-transactions = true
# Chain validation mode ("full" or "light").
# In "full" mode all incoming blocks will be fully validated.
# In "light" mode all incoming blocks headers will be fully validated; transactions in those validated blocks will be trusted
# (eosio::chain_plugin)
# validation-mode = full
# Disable the check which subjectively fails a transaction if a contract bills more RAM to another account within the context of a notification handler (i.e. when the receiver is not the code of the action). (eosio::chain_plugin)
# disable-ram-billing-notify-checks = false
# Subjectively limit the maximum length of variable components in a variable length signature to this size in bytes (eosio::chain_plugin)
# maximum-variable-signature-length = 16384
# Indicate a producer whose blocks headers signed by it will be fully validated, but transactions in those validated blocks will be trusted. (eosio::chain_plugin)
# trusted-producer =
# Database map mode ("mapped", "heap", or "locked").
# In "mapped" mode database is memory mapped as a file.
# In "heap" mode database is preloaded in to swappable memory and will use huge pages if available.
# In "locked" mode database is preloaded, locked in to memory, and will use huge pages if available.
# (eosio::chain_plugin)
# database-map-mode = mapped
# Maximum size (in MiB) of the EOS VM OC code cache (eosio::chain_plugin)
# eos-vm-oc-cache-size-mb = 1024
# Number of threads to use for EOS VM OC tier-up (eosio::chain_plugin)
# eos-vm-oc-compile-threads = 1
# Enable EOS VM OC tier-up runtime (eosio::chain_plugin)
eos-vm-oc-enable = true
# enable queries to find accounts by various metadata. (eosio::chain_plugin)
# enable-account-queries = false
# maximum allowed size (in bytes) of an inline action for a nonprivileged account (eosio::chain_plugin)
# max-nonprivileged-inline-action-size = 4096
# Maximum size (in GiB) allowed to be allocated for the Transaction Retry feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-retry-max-storage-size-gb =
# How often, in seconds, to resend an incoming transaction to network if not seen in a block. (eosio::chain_plugin)
# transaction-retry-interval-sec = 20
# Maximum allowed transaction expiration for retry transactions, will retry transactions up to this value. (eosio::chain_plugin)
# transaction-retry-max-expiration-sec = 120
# Maximum size (in GiB) allowed to be allocated for the Transaction Finality Status feature. Setting above 0 enables this feature. (eosio::chain_plugin)
# transaction-finality-status-max-storage-size-gb =
# Duration (in seconds) a successful transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-success-duration-sec = 180
# Duration (in seconds) a failed transaction's Finality Status will remain available from being first identified. (eosio::chain_plugin)
# transaction-finality-status-failure-duration-sec = 180
# if set, periodically prune the block log to store only configured number of most recent blocks (eosio::chain_plugin)
block-log-retain-blocks = 10240
# PEM encoded trusted root certificate (or path to file containing one) used to validate any TLS connections made. (may specify multiple times)
# (eosio::http_client_plugin)
# https-client-root-cert =
# true: validate that the peer certificates are valid and trusted, false: ignore cert errors (eosio::http_client_plugin)
# https-client-validate-peers = true
# The filename (relative to data-dir) to create a unix socket for HTTP RPC; set blank to disable. (eosio::http_plugin)
# unix-socket-path =
# The local IP and port to listen for incoming http connections; set blank to disable. (eosio::http_plugin)
http-server-address = 0.0.0.0:8888
# The local IP and port to listen for incoming https connections; leave blank to disable. (eosio::http_plugin)
# https-server-address =
# Filename with the certificate chain to present on https connections. PEM format. Required for https. (eosio::http_plugin)
# https-certificate-chain-file =
# Filename with https private key in PEM format. Required for https (eosio::http_plugin)
# https-private-key-file =
# Configure https ECDH curve to use: secp384r1 or prime256v1 (eosio::http_plugin)
# https-ecdh-curve = secp384r1
# Specify the Access-Control-Allow-Origin to be returned on each request. (eosio::http_plugin)
access-control-allow-origin = *
# Specify the Access-Control-Allow-Headers to be returned on each request. (eosio::http_plugin)
# access-control-allow-headers =
# Specify the Access-Control-Max-Age to be returned on each request. (eosio::http_plugin)
# access-control-max-age =
# Specify if Access-Control-Allow-Credentials: true should be returned on each request. (eosio::http_plugin)
# access-control-allow-credentials = false
# The maximum body size in bytes allowed for incoming RPC requests (eosio::http_plugin)
max-body-size = 8388608
# Maximum size in megabytes http_plugin should use for processing http requests. 503 error response when exceeded. (eosio::http_plugin)
http-max-bytes-in-flight-mb = 4096
# Maximum time for processing a request. (eosio::http_plugin)
http-max-response-time-ms = 100000
# Append the error log to HTTP responses (eosio::http_plugin)
verbose-http-errors = true
# If set to false, then any incoming "Host" header is considered valid (eosio::http_plugin)
http-validate-host = false
# Additionally acceptable values for the "Host" header of incoming HTTP requests, can be specified multiple times. Includes http/s_server_address by default. (eosio::http_plugin)
# http-alias =
# Number of worker threads in http thread pool (eosio::http_plugin)
http-threads = 4096
# The maximum number of pending login requests (eosio::login_plugin)
# max-login-requests = 1000000
# The maximum timeout for pending login requests (in seconds) (eosio::login_plugin)
# max-login-timeout = 60
# The actual host:port used to listen for incoming p2p connections. (eosio::net_plugin)
p2p-listen-endpoint = 0.0.0.0:9876
# An externally accessible host:port for identifying this node. Defaults to p2p-listen-endpoint. (eosio::net_plugin)
# p2p-server-address =
# The public endpoint of a peer node to connect to. Use multiple p2p-peer-address options as needed to compose a network.
# Syntax: host:port[:<trx>|<blk>]
# The optional 'trx' and 'blk' indicates to node that only transactions 'trx' or blocks 'blk' should be sent. Examples:
# p2p.eos.io:9876
# p2p.trx.eos.io:9876:trx
# p2p.blk.eos.io:9876:blk
# (eosio::net_plugin)
# https://mainnet.eosio.online/endpoints,Updated at: August 22, 2022
p2p-peer-address = eos.seed.eosnation.io:9876
p2p-peer-address = eos.edenia.cloud:9876
p2p-peer-address = p2p.eossweden.org:9876
p2p-peer-address = p2p.eosflare.io:9876
p2p-peer-address = peer.main.alohaeos.com:9876
p2p-peer-address = seed.greymass.com:9876
p2p-peer-address = p2p-eos.whaleex.com:9876
p2p-peer-address = peer.eosio.sg:9876
p2p-peer-address = p2p.genereos.io:9876
p2p-peer-address = p2p.eos.detroitledger.tech:1337
# Maximum number of client nodes from any single IP address (eosio::net_plugin)
# p2p-max-nodes-per-host = 1
# Allow transactions received over p2p network to be evaluated and relayed if valid. (eosio::net_plugin)
# p2p-accept-transactions = true
# The name supplied to identify this node amongst the peers. (eosio::net_plugin)
agent-name = "NodeHub-EOS"
# Can be 'any' or 'producers' or 'specified' or 'none'. If 'specified', peer-key must be specified at least once. If only 'producers', peer-key is not required. 'producers' and 'specified' may be combined. (eosio::net_plugin)
allowed-connection = any
# Optional public key of peer allowed to connect. May be used multiple times. (eosio::net_plugin)
# peer-key =
# Tuple of [PublicKey, WIF private key] (may specify multiple times) (eosio::net_plugin)
# peer-private-key =
# Maximum number of clients from which connections are accepted, use 0 for no limit (eosio::net_plugin)
max-clients = 100
# number of seconds to wait before cleaning up dead connections (eosio::net_plugin)
connection-cleanup-period = 60
# max connection cleanup time per cleanup call in milliseconds (eosio::net_plugin)
# max-cleanup-time-msec = 10
# Maximum time to track transaction for duplicate optimization (eosio::net_plugin)
# p2p-dedup-cache-expire-time-sec = 10
# Number of worker threads in net_plugin thread pool (eosio::net_plugin)
# net-threads = 2
# number of blocks to retrieve in a chunk from any individual peer during synchronization (eosio::net_plugin)
sync-fetch-span = 500
# Enable experimental socket read watermark optimization (eosio::net_plugin)
# use-socket-read-watermark = false
# The string used to format peers when logging messages about them. Variables are escaped with ${<variable name>}.
# Available Variables:
# _name self-reported name
#
# _cid assigned connection id
#
# _id self-reported ID (64 hex characters)
#
# _sid first 8 characters of _peer.id
#
# _ip remote IP address of peer
#
# _port remote port number of peer
#
# _lip local IP address connected to peer
#
# _lport local port number connected to peer
#
# (eosio::net_plugin)
# peer-log-format = ["${_name}" - ${_cid} ${_ip}:${_port}]
# peer heartbeat keepalive message interval in milliseconds (eosio::net_plugin)
# p2p-keepalive-interval-ms = 10000
# Enable block production, even if the chain is stale. (eosio::producer_plugin)
enable-stale-production = false
# Start this node in a state where production is paused (eosio::producer_plugin)
# pause-on-startup = false
# Limits the maximum time (in milliseconds) that is allowed a pushed transaction's code to execute before being considered invalid (eosio::producer_plugin)
max-transaction-time = 60000
# Limits the maximum age (in seconds) of the DPOS Irreversible Block for a chain this node will produce blocks on (use negative value to indicate unlimited) (eosio::producer_plugin)
max-irreversible-block-age = -1
# ID of producer controlled by this node (e.g. inita; may specify multiple times) (eosio::producer_plugin)
# producer-name =
# (DEPRECATED - Use signature-provider instead) Tuple of [public key, WIF private key] (may specify multiple times) (eosio::producer_plugin)
# private-key =
# Key=Value pairs in the form <public-key>=<provider-spec>
# Where:
# <public-key> is a string form of a valid EOSIO public key
#
# <provider-spec> is a string in the form <provider-type>:<data>
#
# <provider-type> is KEY, or KEOSD
#
# KEY:<data> is a string form of a valid EOSIO private key which maps to the provided public key
#
# KEOSD:<data> is the URL where keosd is available and the appropriate wallet(s) are unlocked (eosio::producer_plugin)
# signature-provider = EOS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV=KEY:5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
# Limits the maximum time (in milliseconds) that is allowed for sending blocks to a keosd provider for signing (eosio::producer_plugin)
# keosd-provider-timeout = 5
# account that cannot access extended CPU/NET virtual resources (eosio::producer_plugin)
greylist-account = blocktwitter
greylist-account = chaintwitter
greylist-account = eidosonecoin
# Limit (between 1 and 1000) on the multiple that CPU/NET virtual resources can extend during low usage (only enforced subjectively; use 1000 to not enforce any limit) (eosio::producer_plugin)
# greylist-limit = 1000
# Offset of non last block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# produce-time-offset-us = 0
# Offset of last block producing time in microseconds. Valid range 0 .. -block_time_interval. (eosio::producer_plugin)
# last-block-time-offset-us = -200000
# Percentage of cpu block production time used to produce block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# cpu-effort-percent = 80
# Percentage of cpu block production time used to produce last block. Whole number percentages, e.g. 80 for 80% (eosio::producer_plugin)
# last-block-cpu-effort-percent = 80
# Threshold of CPU block production to consider block full; when within threshold of max-block-cpu-usage block can be produced immediately (eosio::producer_plugin)
# max-block-cpu-usage-threshold-us = 5000
# Threshold of NET block production to consider block full; when within threshold of max-block-net-usage block can be produced immediately (eosio::producer_plugin)
# max-block-net-usage-threshold-bytes = 1024
# Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing. (eosio::producer_plugin)
# max-scheduled-transaction-time-per-block-ms = 100
# Time in microseconds allowed for a transaction that starts with insufficient CPU quota to complete and cover its CPU usage. (eosio::producer_plugin)
# subjective-cpu-leeway-us = 31000
# Sets the maximum amount of failures that are allowed for a given account per block. (eosio::producer_plugin)
# subjective-account-max-failures = 3
# Sets the time to return full subjective cpu for accounts (eosio::producer_plugin)
# subjective-account-decay-time-minutes = 1440
# ratio between incoming transactions and deferred transactions when both are queued for execution (eosio::producer_plugin)
# incoming-defer-ratio = 1
# Maximum size (in MiB) of the incoming transaction queue. Exceeding this value will subjectively drop transaction with resource exhaustion. (eosio::producer_plugin)
# incoming-transaction-queue-size-mb = 1024
# Disable the re-apply of API transactions. (eosio::producer_plugin)
# disable-api-persisted-trx = false
# Disable subjective CPU billing for API/P2P transactions (eosio::producer_plugin)
# disable-subjective-billing = true
# Account which is excluded from subjective CPU billing (eosio::producer_plugin)
# disable-subjective-account-billing =
# Disable subjective CPU billing for P2P transactions (eosio::producer_plugin)
# disable-subjective-p2p-billing = true
# Disable subjective CPU billing for API transactions (eosio::producer_plugin)
# disable-subjective-api-billing = true
# Number of worker threads in producer thread pool (eosio::producer_plugin)
# producer-threads = 2
# the location of the snapshots directory (absolute path or relative to application data dir) (eosio::producer_plugin)
# snapshots-dir = "snapshots"
# Time in seconds between two consecutive checks of resource usage. Should be between 1 and 300 (eosio::resource_monitor_plugin)
# resource-monitor-interval-seconds = 2
# Threshold in terms of percentage of used space vs total space. If used space is above (threshold - 5%), a warning is generated. Unless resource-monitor-not-shutdown-on-threshold-exceeded is enabled, a graceful shutdown is initiated if used space is above the threshold. The value should be between 6 and 99 (eosio::resource_monitor_plugin)
# resource-monitor-space-threshold = 90
# Used to indicate nodeos will not shutdown when threshold is exceeded. (eosio::resource_monitor_plugin)
# resource-monitor-not-shutdown-on-threshold-exceeded =
# Number of resource monitor intervals between two consecutive warnings when the threshold is hit. Should be between 1 and 450 (eosio::resource_monitor_plugin)
# resource-monitor-warning-interval = 30
# the location of the state-history directory (absolute path or relative to application data dir) (eosio::state_history_plugin)
state-history-dir = "/mnt/eosmain/node/state-history"
# enable trace history (eosio::state_history_plugin)
trace-history = true
# enable chain state history (eosio::state_history_plugin)
chain-state-history = true
# the endpoint upon which to listen for incoming connections. Caution: only expose this port to your internal network. (eosio::state_history_plugin)
state-history-endpoint = 127.0.0.1:8080
# enable debug mode for trace history (eosio::state_history_plugin)
# trace-history-debug-mode = false
# if set, periodically prune the state history files to store only configured number of most recent blocks (eosio::state_history_plugin)
# state-history-log-retain-blocks =
# the location of the trace directory (absolute path or relative to application data dir) (eosio::trace_api_plugin)
# trace-dir = "traces"
# the number of blocks each "slice" of trace data will contain on the filesystem (eosio::trace_api_plugin)
# trace-slice-stride = 10000
# Number of blocks to ensure are kept past LIB for retrieval before "slice" files can be automatically removed.
# A value of -1 indicates that automatic removal of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-irreversible-history-blocks = -1
# Number of blocks to ensure are uncompressed past LIB. Compressed "slice" files are still accessible but may carry a performance loss on retrieval
# A value of -1 indicates that automatic compression of "slice" files will be turned off. (eosio::trace_api_plugin)
# trace-minimum-uncompressed-irreversible-history-blocks = -1
# ABIs used when decoding trace RPC responses.
# There must be at least one ABI specified OR the flag trace-no-abis must be used.
# ABIs are specified as "Key=Value" pairs in the form <account-name>=<abi-def>
# Where <abi-def> can be:
# an absolute path to a file containing a valid JSON-encoded ABI
# a relative path from `data-dir` to a file containing a valid JSON-encoded ABI
# (eosio::trace_api_plugin)
# trace-rpc-abi =
# Use to indicate that the RPC responses will not use ABIs.
# Failure to specify this option when there are no trace-rpc-abi configurations will result in an Error.
# This option is mutually exclusive with trace-rpc-abi (eosio::trace_api_plugin)
# trace-no-abis =
# Lag in number of blocks from the head block when selecting the reference block for transactions (-1 means Last Irreversible Block) (eosio::txn_test_gen_plugin)
# txn-reference-block-lag = 0
# Number of worker threads in txn_test_gen thread pool (eosio::txn_test_gen_plugin)
# txn-test-gen-threads = 2
# Prefix to use for accounts generated and used by this plugin (eosio::txn_test_gen_plugin)
# txn-test-gen-account-prefix = txn.test.
# Plugin(s) to enable, may be specified multiple times
plugin = eosio::net_plugin
plugin = eosio::http_plugin
plugin = eosio::chain_plugin
plugin = eosio::chain_api_plugin
plugin = eosio::producer_plugin
plugin = eosio::producer_api_plugin
plugin = eosio::state_history_plugin
There are a handful of test contracts in unittests/contracts that are not accompanied by source files, making it hard to determine their versions or reproduce them. Especially with eos-system-contracts living outside the AntelopeIO organization, would it be helpful to bring the source across for reference?
They should be clearly marked as example contracts for testing, not for production use, and should live in the unittests/* directory.
They can be pulled from: https://github.com/EOSIO/eos/tree/develop/contracts/contracts
Once the example contracts' source files are included, they should be compiled when EOSIO_COMPILE_TEST_CONTRACTS is indicated.
Followup from: #46
http-max-bytes-in-flight-mb = 4096 is too big, but http-max-bytes-in-flight-mb = 4095 works fine.
If there is a maximum limit, there should be a hard fail at startup. Realistically, setting it this big is a bit crazy.
$ cleos get info
error 2022-08-25T13:45:41.077 cleos main.cpp:4121 operator() ] Failed with error: Too many bytes in flight: 897 (429)
The comment in the default config for http-max-bytes-in-flight-mb is wrong: it returns 429, not 503, when the limit is exceeded.
# Maximum size in megabytes http_plugin should use for processing http requests. 503 error response when exceeded. (eosio::http_plugin)
Originally posted by @matthewdarwin in #46 (comment)
Duplicate eos-system-contracts as vanilla reference-contracts version in Antelope org
First time block_log_retain_blocks_test has been seen failing in CICD: https://github.com/AntelopeIO/leap/runs/8236888891?check_suite_focus=true
[2022-09-07T20:41:10.109869 File "/__w/leap/leap/build/tests/block_log_retain_blocks_test.py", line 64, in <module>
errorExit("Cluster never stabilized")]
Following design in eosnetworkfoundation/product#72.
I'm seeing a very weird bug: I run nodeos with two different configs, both completely commented out, and with one I get deep-mind prints while with the other I don't.
nodeos -e -p eosio --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin --plugin eosio::chain_api_plugin --plugin eosio::chain_plugin --plugin eosio::http_plugin --delete-all-blocks -d /home/ubuntu/.zeus/nodeos/data --http-server-address=0.0.0.0:8888 --access-control-allow-origin=* --contracts-console --max-transaction-time=150000 --http-validate-host=false --http-max-response-time-ms=9999999 --verbose-http-errors --trace-history-debug-mode --wasm-runtime=eos-vm-jit --genesis-json=/home/ubuntu/.zeus/nodeos/config/genesis.json --chain-threads=2 --abi-serializer-max-time-ms=100 --max-block-cpu-usage-threshold-us=50000 --eos-vm-oc-enable --eos-vm-oc-compile-threads=2 --deep-mind --block-log-retain-blocks=1000 --disable-subjective-billing=true
Gets me deep mind prints
nodeos -e -p eosio --plugin eosio::producer_plugin --plugin eosio::producer_api_plugin --plugin eosio::chain_api_plugin --plugin eosio::chain_plugin --plugin eosio::http_plugin --delete-all-blocks -d /home/ubuntu/.zeus/nodeos/data --http-server-address=0.0.0.0:8888 --access-control-allow-origin=* --contracts-console --max-transaction-time=150000 --http-validate-host=false --http-max-response-time-ms=9999999 --verbose-http-errors --trace-history-debug-mode --wasm-runtime=eos-vm-jit --genesis-json=/home/ubuntu/.zeus/nodeos/config/genesis.json --chain-threads=2 --abi-serializer-max-time-ms=100 --max-block-cpu-usage-threshold-us=50000 --eos-vm-oc-enable --eos-vm-oc-compile-threads=2 --deep-mind --block-log-retain-blocks=1000 --disable-subjective-billing=true --config-dir /home/ubuntu/.zeus/nodeos/config
Does not.
Diff of config.tomls both 100% commented out https://www.diffchecker.com/d9YYPz5L
Diff of commands and outputs (same order left/right as tomls): https://www.diffchecker.com/gB3nbRRt
nodeos -v
v3.1.0
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
On block producing (and other) nodes, there are various transactions queued up for various purposes, e.g. the pending queue on a BP node.
Add producer APIs to allow inspection of these kinds of queues.
Show the transaction id, authorizing account, CPU usage, and other interesting details.
This request is vague, as I am not sure what information is easy to obtain. The goal is to give node operators the details needed to manage situations where their nodes are under heavy CPU load, so they can easily find which account/contract is a heavy user and possibly take action to mitigate the impact of high levels of "abuse".
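Purely as a hypothetical sketch of what such an API could return (none of these field names exist in nodeos today; they are illustrative only):

```json
{
  "pending_queue": [
    {
      "id": "4f8c0d...",
      "first_auth": "someaccount1",
      "billed_cpu_us": 1420,
      "expiration": "2022-09-07T20:41:10.000"
    }
  ]
}
```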
Recently on the WAX blockchain, nodes were running with very high CPU utilization and generating blocks with low numbers of transactions.
Additional logging that would have been helpful in debugging the situation has already been added to the main branch. Rather than waiting for the next release, this request is to backport that logging into the next patch release of 3.1.x.
It is a new feature, but the logging is off by default; the benefits of this larger change might outweigh the risks.
The Nation team is starting to test the new logging features by running the main branch on non-producing, non-critical nodes, but we have not fully tested the changes yet.
On a block producing node we have many details logged when producing a block (with debug logging enabled).
To aid debugging of network issues on non-producing nodes, it would be great if there were some sort of per-speculative-block summary of how many trxs failed, for what reason, and for which accounts.
To ease log scraping, the log format between producing and non-producing nodes should ideally be the same.
max-scheduled-transaction-time-per-block-ms
- "Maximum wall-clock time, in milliseconds, spent retiring scheduled transactions in any block before returning to normal transaction processing."
Currently, if incoming-defer-ratio
- "ratio between incoming transactions and deferred transactions when both are queued for execution"
(default is 1) is set to >= 1, then incoming transactions are processed during the scheduled-transaction time per block, reducing the number of scheduled transactions that can be processed during max-scheduled-transaction-time-per-block-ms.
Options:
1. Remove (or deprecate) incoming-defer-ratio and use the full max-scheduled-transaction-time-per-block-ms exclusively for scheduled transaction processing.
2. Change process_scheduled_and_incoming_trxs to only count scheduled transaction processing against the max-scheduled-transaction-time-per-block-ms limit.
Option 1 would simplify eosnetworkfoundation/mandel#297.
Copy from eosnetworkfoundation/mandel#67
Node operators on WAX recently saw issues in block production related to heavily filled unapplied_transactions queues.
While still unsure about the exact issues, it furthermore looked like huge parts of the queue were filled with failing transactions, leading to a huge amount of transaction-processing time being consumed without transactions being applied to the produced block.
Regarding the unapplied_transactions queue, we questioned whether a configurable per-account limit on unapplied transactions in that queue would help in such scenarios, as the lack of a per-account limit basically allows a single account to push as many trxs as possible, filling up the queue. Even with subjective billing enabled, a single account would be able to fill up the queue with failing transactions.
Probably "subjective-account-max-failures" (added in https://github.com/eosnetworkfoundation/mandel/releases/tag/v3.1.0-rc1) kicks in at some point, but it's not clear how exactly it works.
An important question is what a sustainable and reliable limit would be, and how to distinguish between authorizers. A co-signing service could legitimately push hundreds of transactions per block, and blocking such an account would be against the core design of the protocol and harm non-malicious users and services.
Currently the transaction logger in producer_plugin logs the transaction type, not the signed_transaction type.
Consider logging signed_transaction instead, which would include signatures and context_free_data, both of which are currently not logged.
https://github.com/AntelopeIO/leap/blob/main/plugins/producer_api_plugin/producer.swagger.yaml#L330
This is correct:
curl -d '{"actor_whitelist":[],"actor_blacklist":[],"contract_whitelist":[],"contract_blacklist":[],"action_blacklist":[],"key_blacklist":[]}' http://127.0.0.1:8888/v1/producer/set_whitelist_blacklist
There is no "params".
fc/secp256k1 has 192 compile warnings. They are in the form of
libraries/fc/secp256k1/secp256k1/src/field.h:44:13: warning: ‘secp256k1_fe_normalize’ declared ‘static’ but never defined [-Wunused-function]
libraries/fc/secp256k1/secp256k1/src/field.h:47:13: warning: ‘secp256k1_fe_normalize_weak’ declared ‘static’ but never defined [-Wunused-function]
...
libraries/fc/secp256k1/secp256k1/src/ecmult_impl.h:395:12: warning: ‘secp256k1_ecmult_strauss_batch_single’ defined but not used [-Wunused-function]
They should be cleaned up after the decision on de-submoduling is made.
In the background do a
nc -l -p 9876
to tie up the port.
Then just
./nodeos
...
info 2022-08-26T18:37:07.548 nodeos resource_monitor_plugi:94 plugin_startup ] Creating and starting monitor thread
info 2022-08-26T18:37:07.549 nodeos file_space_handler.hpp:112 add_file_system ] /root/.local/share/eosio/nodeos/data/blocks's file system monitored. shutdown_available: 50010786200, capacity: 500107862016, threshold: 90
warn 2022-08-26T18:37:07.549 resmon file_space_handler.hpp:66 is_threshold_exceede ] Space usage warning: /root/.local/share/eosio/nodeos/data/blocks's file system approaching threshold. available: 74965753856, warning_available: 75016179300
warn 2022-08-26T18:37:07.549 resmon file_space_handler.hpp:68 is_threshold_exceede ] nodeos will shutdown when space usage exceeds threshold 90%
error 2022-08-26T18:37:07.549 nodeos net_plugin.cpp:3746 operator() ] net_plugin::plugin_startup failed to bind to port 9876
error 2022-08-26T18:37:07.549 nodeos main.cpp:174 main ] std::exception
terminate called without an active exception
Aborted (core dumped)
This crash occurs early enough that the DB is left dirty.
v3.1.0-50894acec3991dc108516089be52c47d61eabf88
(this is 3.1.0 post CI branch protection modification)
Minor warnings:
plugins/trace_api_plugin/compressed_file.cpp:138:66: warning: comparison of integer expressions of different signedness: ‘std::__tuple_element_t<0, std::tuple<long unsigned int, long unsigned int> >’ {aka ‘long unsigned int’} and ‘long int’ [-Wsign-compare]
plugins/trace_api_plugin/compressed_file.cpp:289:30: warning: comparison of integer expressions of different signedness: ‘int’ and ‘const long unsigned int’ [-Wsign-compare]
trace_api_plugin/trace_api_plugin.cpp:363:12: warning: variable ‘cfg_options’ set but not used [-Wunused-but-set-variable]
eosio-blocklog can convert a blocks.log to JSON, but there is no support for fork_db.dat. It would be a helpful debug capability to convert fork_db.dat to JSON.
Minor warnings:
libraries/chain/deep_mind.cpp:46:4: warning: control reaches end of non-void function [-Wreturn-type]
libraries/chain/abi_serializer.cpp:80:42: warning: ‘void eosio::chain::abi_serializer::set_abi(const eosio::chain::abi_def&, const fc::microseconds&)’ is deprecated: use the overload with yield_function_t[=create_yield_function(max_serialization_time)] [-Wdeprecated-declarations]
plugins/producer_plugin/producer_plugin.cpp:1959:43: warning: comparison of integer expressions of different signedness: ‘uint64_t’ {aka ‘long unsigned int’} and ‘int64_t’ {aka ‘long int’} [-Wsign-compare]
1959 | if( prev_billed_plus100_us < max_trx_time.count() ) max_trx_time = fc::microseconds( prev_billed_plus100_us );
unittests/auth_tests.cpp:235:18: warning: variable ‘new_owner_priv_key’ set but not used [-Wunused-but-set-variable]
In CICD for PR #17, compiling succeeded on Ubuntu 20.04 and 22.04, but failed on Ubuntu 18.04: https://github.com/AntelopeIO/leap/runs/7905995360?check_suite_focus=true#step:4:1206
/usr/bin/g++-8 -DBOOST_ATOMIC_NO_LIB -DBOOST_CHRONO_NO_LIB -DBOOST_DATE_TIME_NO_LIB -DBOOST_FILESYSTEM_NO_LIB -DBOOST_IOSTREAMS_NO_LIB -DBOOST_PROGRAM_OPTIONS_NO_LIB -DOPENSSL_API_COMPAT=0x10100000L -DOPENSSL_NO_DEPRECATED -I../benchmark -I../libraries/fc/include -I../libraries/fc/vendor/websocketpp -I../libraries/fc/libraries/ff/libff/.. -isystem /usr/include/x86_64-linux-gnu -isystem /boost/include -I../libraries/fc/secp256k1/secp256k1 -I../libraries/fc/secp256k1/secp256k1/include -Wall -O3 -DNDEBUG -fdiagnostics-color=always -pthread -std=gnu++1z -MD -MT benchmark/CMakeFiles/benchmark.dir/hash.cpp.o -MF benchmark/CMakeFiles/benchmark.dir/hash.cpp.o.d -o benchmark/CMakeFiles/benchmark.dir/hash.cpp.o -c ../benchmark/hash.cpp
In file included from ../benchmark/hash.cpp:2:
../libraries/fc/include/fc/crypto/sha3.hpp:105:8: error: 'hash' is not a class template
struct hash<fc::sha3>
^~~~
../libraries/fc/include/fc/crypto/sha3.hpp:106:1: error: explicit specialization of non-template 'boost::hash'
The problem was due to the Boost version. Matt suggests:
Matt Witherspoon, [Aug 19, 2022 at 5:58:58 PM]:
...sha3.hpp might just need
#include <boost/functional/hash.hpp>
like sha256.hpp has
actually, it seems to compile fine just removing the boost::hash impl in sha3.hpp, so that code may not even be needed at the moment
Add basic documentation on logging.json.
Recently, transaction_trace_failure: debug was used for debugging WAX issues.
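A minimal logging.json fragment illustrating the kind of thing such documentation should cover. The structure below follows the standard nodeos logging.json layout, but treat the appender wiring as illustrative; check the shipped logging.json for the exact appender names:

```json
{
  "includes": [],
  "appenders": [
    {
      "name": "stderr",
      "type": "console",
      "args": { "stream": "std_error" },
      "enabled": true
    }
  ],
  "loggers": [
    {
      "name": "transaction_trace_failure",
      "level": "debug",
      "enabled": true,
      "additivity": false,
      "appenders": [ "stderr" ]
    }
  ]
}
```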
Nodeos stops with the following error:
nodeos file_space_handler.hpp:112 add_file_system ] /var/lib/nodeos/blocks's file system monitored. shutdown_available: 5259198460, capacity: 52591984640, threshold: 90
resmon file_space_handler.hpp:62 is_threshold_exceede ] Space usage warning: /var/lib/nodeos/blocks's file system exceeded threshold 90%, available: 5215444992, Capacity: 52591984640, shutdown_available: 5259198460
[etc...]
info 2022-08-26T17:02:35.539 nodeos main.cpp:181 main ] nodeos successfully exiting
Then if you start nodeos again, it goes through the whole process of loading the blockchain state (which takes a while with database-map-mode = heap). Once that is done, it exits again.
When you have nodeos running under something that restarts it whenever it exits (e.g. systemd), this just burns CPU and disk I/O.
Two improvements if there is not sufficient disk space:
There is nothing "ok" about this request:
curl -d '{"foo":["bar"]}' http://127.0.0.1:8888/v1/producer/set_whitelist_blacklist
{"result":"ok"}
Input validation is required to ensure what is being submitted actually changes configuration.
PR merge failed with the following test failure. Looks like a race condition.
https://github.com/AntelopeIO/leap/runs/8045777819?check_suite_focus=true#step:5:192
2022-08-26T22:59:16.436040 Checking if port 9899 is available.
2022-08-26T22:59:16.439659 Checking if port 9899 is available.
2022-08-26T22:59:16.443260 cmd: programs/keosd/keosd --data-dir test_wallet_0 --config-dir test_wallet_0 --http-server-address=localhost:9899 --verbose-http-errors
2022-08-26T22:59:18.447069 Checking if keosd launched. pgrep -a keosd
2022-08-26T22:59:18.451425 Launched keosd. {240 programs/keosd/keosd --data-dir test_wallet_0 --config-dir test_wallet_0 --http-server-address=localhost:9899 --verbose-http-errors
}
2022-08-26T22:59:18.451907 cmd: programs/cleos/cleos --url http://localhost:8888 --wallet-url http://localhost:9899 --no-auto-keosd wallet create --name ignition --to-console
2022-08-26T22:59:18.463844 ERROR: b'Failed http request to keosd at http://localhost:9899; is keosd running?\n Error: connect: Connection refused\n'
2022-08-26T22:59:18.472022 Checking if port 9899 is available.
2022-08-26T22:59:18.472274 ERROR: Port 9899 is already in use
2022-08-26T22:59:18.472464 Test failed.
The CICD system does an actions/checkout to get the code to build. Then, later, when running the tests, it does an actions/checkout again in combination with restoring the builddir -- some tests peek outside the builddir at the source directory, so we need both.
Well, actions/checkout operates on the branch name, e.g. it will do something like
/usr/bin/git checkout --progress --force -B minor_build_speed_improvement refs/remotes/origin/minor_build_speed_improvement
This means there is a race condition: if someone pushes to the branch between the build and test steps, the source directory won't match what the builddir represents during the test step.
Maybe in addition to tarballing up the builddir, both the sourcedir and builddir should be tarballed up and sent downstream to tests.
While node operators on WAX recently saw issues in block production, many had to restart nodeos processes frequently to update specific configuration parameters like subjective billing, cpu-effort, greylists/blacklists/whitelists, etc.
A common approach is to:
1.) Change the configuration (config.ini)
2.) Kill the nodeos process after successfully producing a round
3.) Restart immediately, so nodeos is running and caught up again when the next round needs to be produced on this producer node
This approach seems unnecessarily complex and error-prone. A possible improvement could be a hot reload on config change for parameters that can be changed at runtime, as an alternative to additional RPC API functionality, which always comes with drawbacks like accidentally exposing endpoints to the public.
I would really dig this.