qdrvm / kagome

Kagome - C++20 implementation of Polkadot Host

Home Page: https://kagome.readthedocs.io

License: Apache License 2.0

CMake 2.44% C++ 95.88% C 0.21% Shell 0.36% Dockerfile 0.20% Makefile 0.36% Python 0.22% JavaScript 0.19% TypeScript 0.02% WebAssembly 0.12%
polkadot cpp libp2p wasm cpp20

kagome's Introduction

Intro

KAGOME is a Polkadot Host (formerly Polkadot Runtime Environment) developed by Quadrivium and funded by a Web3 Foundation grant and Treasury proposals (1, 2).

Status

(Diagram: KAGOME Host components)

  • JSON-RPC (compatible with Polkadot JS)
  • Scale codec
  • Synchronizer
    • Full sync
    • Fast sync
    • Warp sync
  • Transaction pool
  • Consensus
    • BABE
    • GRANDPA
    • SASSAFRAS
  • Storage
    • Blockchain
      • Block storage
      • Block tree
      • Digest tracker
    • Trie storage (merkle trie)
    • RocksDB
    • Dynamic pruning
    • Trie node caches
    • State caches
  • Runtime
    • Host API
    • WASM engine
      • Binaryen
      • WAVM
      • WasmEdge
  • Parachains core
    • Non-asynchronous Backing
    • Data availability
    • Approval voting
    • Dispute resolution
    • Async-Backing
    • Elastic scaling
  • Networking
    • Peer manager
      • /dot/block-announces/1
      • /paritytech/grandpa/1
      • /polkadot/validation/1
      • /polkadot/collation/1
      • /dot/transactions/1
      • /polkadot/req_collation/1
      • /dot/light/2
      • /polkadot/req_pov/1
      • /dot/state/2
      • /dot/sync/warp
      • /polkadot/req_statement/1
      • /dot/sync/2
      • /polkadot/req_available_data/1
      • /polkadot/req_chunk/1
      • /polkadot/send_dispute/1
    • Libp2p
      • Transport
        • TCP
        • QUIC
        • WebRTC
      • Secure connection
        • Noise
        • TLS
      • Multiplexing
        • Yamux
      • Multiselect protocol
      • Peer discovery
        • Kademlia
      • Ping protocol
      • Identify protocol
  • Offchain workers
  • Keystore
  • Telemetry support
  • Prometheus metrics

More details of KAGOME development can be found in the supported features section and on the projects board.

Getting Started

Build

Prerequisites

If you are using a Debian Linux system, the following commands allow you to build KAGOME:

git clone https://github.com/qdrvm/kagome
cd kagome
sudo chmod +x scripts/init.sh scripts/build.sh

sudo ./scripts/init.sh
./scripts/build.sh

You will find the KAGOME binary in the build/node/ folder.

Other make commands are:

make docker
make command args="gcc --version"
make release
make release_docker
make debug_docker
make clear

Using KAGOME

Obtaining database snapshot (optional)

In order to avoid syncing from scratch, we maintain the most recent snapshot of the Polkadot network for the KAGOME node, available to anyone here: https://drive.google.com/drive/folders/1pAZ1ongWB3_zVPKXvgOo-4aBB7ybmKy5?usp=sharing

After downloading the snapshot, you can extract it in the folder where the node will be running:

unzip polkadot-node-1.zip

Execute KAGOME Polkadot full syncing node

You can synchronize with Polkadot using KAGOME and obtain an archive node that can be used to query the Polkadot network at any state.

To launch a KAGOME Polkadot syncing node, execute:

cd examples/polkadot/
PATH=$PATH:../../build/node/
kagome --chain polkadot.json --base-path polkadot-node-1

Note: If you start KAGOME for the first time, you can add the --sync Fast flag to synchronize using Fast sync.

After this command, KAGOME will connect to other nodes in the network and start importing blocks. You may play with your local node using Polkadot JS Apps: https://polkadot.js.org/apps/?rpc=ws%3A%2F%2F127.0.0.1%3A9944#/explorer

You will also be able to see your node on https://telemetry.polkadot.io/. If you need to identify it more easily, you can add the --name <node-name> flag to the node's execution command and find your node in telemetry by typing its name.
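
For example, assuming the same working directory and PATH setup as above, a first-time sync that is easy to find in telemetry could combine these flags into one command (the node name here is only an illustration):

kagome --chain polkadot.json --base-path polkadot-node-1 --sync Fast --name my-kagome-node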

Run kagome --help to explore other CLI flags.

Execute KAGOME block_validator node in development mode

The easiest way to get started with KAGOME is to run it in development mode, which is a single node network:

kagome --dev

That executes a node with the default accounts Alice and Bob. You can read about these accounts here.

To launch with existing data wiped, run:

kagome --dev-with-wipe

Run KAGOME node in validator mode

To start the KAGOME validator:

cd examples/first_kagome_chain
PATH=$PATH:../../build/node/
kagome --validator --chain localchain.json --base-path base_path

This command executes a KAGOME full node with an authority role.

Run KAGOME with collator

Read this tutorial

Configuration Details

To run a KAGOME node, you need to provide it with a genesis config, cryptographic keys, and a place to store db files.

  • Example of a genesis config file can be found in examples/first_kagome_chain/localchain.json
  • Example of a base path dir can be found in examples/first_kagome_chain/base_path
  • To create database files, just provide any base path to the kagome executable (note that starting with the authority role requires the keys to be present); see the example below.
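
A minimal sketch of such an invocation, reusing the example chain spec listed above (the base path location here is an arbitrary choice for illustration):

kagome --chain examples/first_kagome_chain/localchain.json --base-path /tmp/kagome-base-path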

Contributing Guides

Please refer to the Contributor Documentation.

Modules

  • api
    • JSON-RPC based on specification from PSP and Polkadot JS documentation. Uses HTTP and WebSockets for transport.
  • application
    • Implements the logic for running a KAGOME node, such as processing CLI flags and defining the execution order of different modules
  • assets
    • Artifacts needed to run in development mode
  • authority_discovery
    • Logic for finding peer information by authority id provided
  • authorship
    • Mechanism for building blocks from extrinsics provided by the transaction pool
  • blockchain
    • Implements blockchain logic such as fork handling and block digest information processing
  • clock
    • Implements a clock interface that is used to access the system time
    • Implements timer interface that is used to schedule events
  • common
    • A set of miscellaneous primitives, data structures, and helpers that are widely used in the project to simplify development
  • consensus
    • Implementation of BABE block production mechanism
    • Implementation of Grandpa finality gadget
  • containers
    • An implementation of a container that serves as an allocator for frequently created objects
  • crypto
    • Implementations of crypto primitives such as digital signature algorithms, hashers, VRF, and random generators
  • filesystem
    • A convenient interface for working with directories in the project
  • host_api
    • Host APIs exposed by the Polkadot runtime as WASM import functions needed to handle storage, cryptography, and memory
  • injector
  • log
    • A configuration of KAGOME logger
  • macro
    • Convenience macros
  • metrics
    • Prometheus metrics to retrieve KAGOME node execution statistics
  • network
    • A set of networking substream protocol implementations on top of the cpp-libp2p library
  • offchain
  • outcome
  • parachain
    • Parachains logic such as collation protocol, backing, availability and validity
  • runtime
    • Integration of Binaryen and WAVM WebAssembly engines with Host APIs
  • scale
    • Scale codec for some primitives
  • storage
    • Storage and trie interfaces with RocksDB implementation
  • subscription
    • Subscription engine
  • telemetry
  • transaction_pool
    • Pool of transactions to be included into the block
  • utils
    • Utils such as profiler, pruner, thread pool

You can find more information about the components in the reference documentation. Check out tutorials and more examples in the official documentation: https://kagome.readthedocs.io/

KAGOME in media

kagome's People

Contributors

akvinikym, bulatsaif, cre-ed, crimeatop, dependabot[bot], elestrias, esganesh, exctues, florianfranzen, garorobe, harrm, iceseer, igkrsh, igor-egorov, kamilsa, kogeler, l4l, lederstrumpf, liralemur, masterjedy, mateomoon, neewy, ortyomka, ravenolf, safinsaf, sanblch, turuslan, warchant, xdimon, zerg-su


kagome's Issues

Web sockets support in libp2p

Web sockets

We should be able to connect to libp2p nodes that use WebSockets as transport. Their multiaddress can be defined as follows: /dns/kusama-bootnode-0.paritytech.net/tcp/30334/ws/p2p/QmTFUXWi98EADXdsUxvv7t9fhJG1XniRijahDXxdv1EbAW
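
For reference, that multiaddress reads, component by component (standard libp2p multiaddr semantics, nothing KAGOME-specific):

  • /dns/kusama-bootnode-0.paritytech.net: resolve the host name via DNS
  • /tcp/30334: dial this TCP port
  • /ws: run the WebSocket transport on top of the TCP connection
  • /p2p/QmTFUXWi98EADXdsUxvv7t9fhJG1XniRijahDXxdv1EbAW: the expected peer ID of the remote node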

  • Implement ws connection (implement RawConnection interface)

While implementing, please take note of https://github.com/libp2p/go-ws-transport#security-and-multiplexing

  • Implement ws transport adaptor (Implement transport adaptor interface)

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-51-4

Update runtime api tests by loading runtime

However, another thing that just came up on our side is that the current hostapi testsuite for kagome calls the hostapi directly and not from within wasm. Calling from within wasm would also test the wasm interaction with external functions on your host. The runtime for this can be provided as a static C/C++ export symbol or as a binary file and can be currently found here.

How would you go about loading a runtime in kagome and then calling "into" it? I moved the setup of the wasm environment into a helper, so that will be a part that needs updating. Secondly, we will have to update all the extension->ext_... calls to actually call a runtime function (i.e. prefixed with rtm_).

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-52-5

Ephemeral streams

A Polkadot Host node should open several substreams. In particular, it should periodically open ephemeral substreams in order to:

  • ping the remote peer and check whether the connection is still alive. Failure for the remote peer to reply leads to a disconnection. This uses the libp2p ping protocol specified in [lab19].
  • request information from the remote peer.
  • send Kademlia random walk queries. Each Kademlia query is done in a new, separate substream.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-51-5

Polkadot JS compatibility

Support for Polkadot JS in Kagome will help the community become familiar with the C++ Polkadot Host. It will also make it easier to prepare and run tutorials similar to this one.

  • state subscribe storage
    #374

  • state_getRuntimeVersion
    #375

  • System apis
    #376

  • rpc_methods
    #377

  • state_subscribeRuntimeVersion
    #378

  • state_getMetadata
    #379

  • chain_subscribeNewHead
    #380

  • system_health
    #381

  • Execute RPC in a separate thread
    #425

  • WebSockets pubsub
    Prepare foundation to write pubsub RPCs

  • Test RPC pubsub
    #469

  • chain_getHeader
    #498

  • system_chainType
    #501

  • state_getKeysPaged
    #502

  • Transaction APIs
    #515

  • chain_subscribe_FinalizedHeads
    #568

  • Insert key
    #578

  • Submit and watch extrinsic
    #574

  • system_peers
    #601

  • chain_getBlock implementation
    #598

  • Update pub/sub rpc to generate event on subscribe
    #584

  • Unsubscribe fix
    #597

Aha! Link: https://soramitsucoltd.aha.io/features/PRE-58
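
As a rough illustration of what this compatibility means at the wire level, each of the methods above is an ordinary JSON-RPC 2.0 call. For example, sending the following payload to the node's WebSocket endpoint (ws://127.0.0.1:9944, as used earlier) with any WebSocket client should return the node's health status:

{"jsonrpc":"2.0","id":1,"method":"system_health","params":[]}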

SASSY/BADASS Babe


BADASS BABE is a constant-time block production protocol. It intends to ensure that exactly one block is produced at constant-time intervals, rather than multiple blocks or none. It extends BABE to address this shortcoming of BABE.

  • Unix slots
    Slot 0 should correspond to the beginning of the Unix epoch (1970-01-01, 00:00:00). Therefore, when BABE is executed, the genesis slot should be calculated as the current Unix time divided by the slot duration.

Aha! Link: https://soramitsucoltd.aha.io/features/PRE-47

Update Host API

Web3 refactored the Runtime APIs; therefore, the APIs need to be updated in the KAGOME implementation.

  • Append existing runtime APIs with _version1
    #364

  • ext_storage_next_key
    #365

  • ext_storage_read
    #366

  • ext_crypto_ed25519* runtime externals
    #367

  • ext_crypto_sr25519* externals
    #368

  • ext_crypto_secp256k1_ecdsa* externals
    #369

  • runtime_version
    #370

  • ext_logging_log_version_1
    #371

  • Implement Polkadot Trie Cursor
    #372

  • New key storage
    #373

  • Cross-implementation tests
    #438

  • Replace old keystorage with a new one
    #442

  • fix return value of recover secp256k1 methods
    #450

  • Child trie
    https://github.com/soramitsu/kagome/issues/465

  • Child trie host apis
    #467

  • Transaction APIs
    #515

  • Batch verify apis
    #516

  • ext_storage_append
    #521

Aha! Link: https://soramitsucoltd.aha.io/features/PRE-60

ext_storage_next_key

Gets the next key in storage after the given one in lexicographic order. Use the cursor for the implementation.

E.1.9.1 Version 1 - Prototype

(func $ext_storage_next_key_version_1 (param $key i64) (result i64))

Arguments :

  • key: a pointer-size as defined in Definition E.2 indicating the key.
  • return: a pointer-size as defined in Definition E.2 indicating the SCALE encoded Option as defined in Definition B.4 containing the next key in lexicographic order.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-2

ext_crypto_sr25519* externals

  1. ext_crypto_sr25519_public_keys Returns all sr25519 public keys for the given key id from the keystore.
  2. ext_crypto_sr25519_generate Generates an sr25519 key for the given key type using an optional BIP-39 seed and stores it in the keystore.
  3. ext_crypto_sr25519_sign Signs the given message with the sr25519 key that corresponds to the given public key and key type in the keystore.
  4. ext_crypto_sr25519_verify Verifies an sr25519 signature. Only version 1 of this function supports deprecated Schnorr signatures introduced by the schnorrkel Rust library version 0.1.1 and should only be used for backward compatibility.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-5

runtime_version

Extract the Runtime version of the given Wasm blob by calling Core_version as defined in Definition G.2.1. Returns the SCALE encoded runtime version or None as defined in Definition B.4 if the call fails. This function gets primarily used when upgrading Runtimes.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-7

ext_crypto_ed25519* runtime externals

  1. ext_crypto_ed25519_public_keys Returns all ed25519 public keys for the given key id from the keystore.
  2. ext_crypto_ed25519_generate Generates an ed25519 key for the given key type using an optional BIP-39 seed and stores it in the keystore.
  3. ext_crypto_ed25519_sign Signs the given message with the ed25519 key that corresponds to the given public key and key type in the keystore.
  4. ext_crypto_ed25519_verify Verifies an ed25519 signature.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-4

Sr25519 (Continued...)

Hello!

To continue the discussion of Sr25519: what is your plan for this library? Will you keep using the linked Rust library, or do you have plans to port it to C++ for better portability?

Offchain worker

Offchain workers are needed to process off-chain data before including it into the on-chain state. Offchain workers require their own WASM execution runtime separate from Polkadot Runtime. To make it work the following APIs should be supported:

  1. Local storage: get and set
  2. HTTP request
  3. Random seed generation
  4. Timestamp
  5. Sleep

Aha! Link: https://soramitsucoltd.aha.io/features/PRE-48

Append existing runtime APIs with _version1

All legacy runtime APIs were updated to have the _version1 suffix. Refactor correspondingly:

  • ext_hashing_keccak_256_version_1,
  • ext_hashing_sha2_256_version_1,
  • ext_hashing_blake2_128_version_1,
  • ext_hashing_blake2_256_version_1,
  • ext_hashing_twox_256_version_1,
  • ext_hashing_twox_128_version_1,
  • ext_hashing_twox_64_version_1,
  • ext_allocator_malloc_version_1,
  • ext_allocator_free_version_1
  • ext_storage_set_version_1,
  • ext_storage_get_version_1,
  • ext_storage_clear_version_1,
  • ext_storage_exists_version_1
  • ext_storage_read_version_1
  • ext_storage_clear_prefix_version_1
  • ext_storage_root_version_1,
  • ext_storage_next_key_version_1
  • ext_crypto_ed25519_generate_version_1,
  • ext_crypto_sr25519_generate_version_1
  • ext_crypto_ed25519_public_keys_version_1,
  • ext_crypto_sr25519_public_keys_version_1
  • ext_crypto_ed25519_sign_version_1,
  • ext_crypto_ed25519_verify_version_1,
  • ext_crypto_sr25519_sign_version_1,
  • ext_crypto_sr25519_verify_version_1
  • ext_trie_blake2_256_root_version_1
  • ext_trie_blake2_256_ordered_root_version_1

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-1

Run kagome from existing database

Currently, Kagome cannot be executed from an existing database. We need to fix that.

A possible way to implement the fix is the following (it is okay to come up with a different solution):

We should always store the hash of the latest finalized block

If the provided leveldb database exists, then the following changes to the trie db should be made (in the application injector):

  1. Get merkle root of the latest finalized block
  2. Create trie db from obtained merkle root

BlockStorage initialization

  1. Add a static factory method create(storage::BufferStorage storage), which does the same as createWithGenesis but does not create the genesis block
  2. In the injector, check if the storage contains the merkle root of the latest finalized state (from the latest finalized block). If the root does not exist, then create the block storage using the createWithGenesis method (as implemented now). Otherwise, create the block storage using the create method

Babe changes:

if (first_peer) {
  NextEpochDescriptor epoch_0_and_1_digest;
  epoch_0_and_1_digest.randomness = genesis_configuration_->randomness;
  epoch_0_and_1_digest.authorities = genesis_configuration_->genesis_authorities;
  epoch_storage_->addEpochDescriptor(0, epoch_0_and_1_digest);
  epoch_storage_->addEpochDescriptor(1, epoch_0_and_1_digest);
}

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-52-28

[DOCS] Clarify Overall Architecture

It is not clear to me what kagome does.

My understanding so far is that:

  • Polkadot runtime, the core-blockchain-and-dapp-system
  • kagome, the C++ interfacing code?

Is there possibly some architectural overview regarding Polkadot and the "language-related interfacing code"?

It looks like it's more than the usual language-related "wrappers".

EDIT:

Can it be that kagome is more like a C++ implementation of Substrate, at least parts of it?

ext_storage_read

Gets the given key from storage, placing the value into a buffer and returning the number of bytes that the entry in storage has beyond the offset.

E.1.3.1 Version 1 - Prototype

(func $ext_storage_read_version_1 (param $key i64) (param $value_out i64) (param $offset i32) (result i64))

Arguments :

  • key: a pointer-size as defined in Definition E.2 containing the key.
  • value_out: a pointer-size as defined in Definition E.2 containing the buffer to which the value will be written. This function will never write more than the length of the buffer, even if the value's length is bigger.
  • offset: an i32 integer containing the offset beyond which the value should be read.
  • result: a pointer-size as defined in Definition E.2 indicating the SCALE encoded Option as defined in Definition B.4 containing the number of bytes written into the value_out buffer. Returns None if the entry does not exist.

Aha! Link: https://soramitsucoltd.aha.io/requirements/PRE-43-3
