
substreams-antelope's Introduction


This library contains the generated protobuf bindings for Antelope blocks, as well as helper methods to extract and parse block data.

📖 Documentation

Further resources

Install

$ cargo add substreams-antelope

Usage

Refer to Docs.rs for helper methods on Block that extract action and transaction iterators from the Antelope block.

Cargo.toml

[dependencies]
substreams = "0.5"
substreams-antelope = "0.4"

src/lib.rs

use substreams::prelude::*;
use substreams::errors::Error;
use substreams_antelope::{Block, ActionTraces};

#[substreams::handlers::map]
fn map_action_traces(block: Block) -> Result<ActionTraces, Error> {
    let mut action_traces = vec![];

    for trx in block.transaction_traces() {
        for trace in trx.action_traces {
            action_traces.push(trace);
        }
    }
    Ok(ActionTraces { action_traces })
}

Or, use the actions() helper method to filter all actions of a given type (in the example below, Transfer actions from the eosio.token account). As a parameter you specify a list of contract account names to include actions from; the list can be empty if you want actions with this signature from any contract account.

src/lib.rs

#[substreams::handlers::map]
fn map_actions(param_account: String, block: substreams_antelope::Block) -> Result<Actions, substreams::errors::Error> {
    Ok(Actions {
        transfers: block.actions::<abi::contract::actions::Transfer>(&["eosio.token"])
            .map(|(action, trace)| Transfer {
                // set action fields
            })
            .collect(),
    })
}

Using Abigen

To generate ABI bindings for your smart contract, you can add an abi/contract.abi.json file containing the smart contract ABI, as well as the following build.rs file, to the root of your project. This ensures that the src/abi/contract.rs module containing Rust bindings for your smart contract is always generated in your project:

build.rs

fn main() {
    substreams_antelope::Abigen::new("Contract", "abi/contract.abi.json")
        .expect("failed to load abi")
        .generate()
        .expect("failed to generate contract")
        .write_to_file("src/abi/contract.rs")
        .expect("failed to write contract");
}
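
Once generated, the bindings can be referenced like any other Rust module. Below is a minimal wiring sketch, assuming the paths from the build.rs above and an src/abi/mod.rs that re-exports the generated file (the Transfer struct is the generated action type used in the earlier example; its name depends on your ABI):

src/abi/mod.rs

pub mod contract;

src/lib.rs

mod abi;

use abi::contract::actions::Transfer; // generated action struct, name depends on your ABI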


substreams-antelope's Issues

Move documentation to dedicated repo

Prerequisites

Before starting to work with Substreams, you need to have Rust and the Substreams CLI installed. To create new Substreams you also need buf available (to generate Rust code from your protobuf definitions). If you want to use some of the available sinks, you may also need Go installed.

Rust

Install Rust via curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh. This installs the rustup toolchain into ~/.cargo, including tools like cargo (the package manager). Run rustup update to keep your toolchain up to date.

To have the toolchain available on your PATH, add source $HOME/.cargo/env to your shell profile (for example to ~/.zshrc if you are using zsh).

See the Rust website for further information about the installation.

Substreams CLI

To be able to execute Substreams you need to install the Substreams CLI. macOS users can install it with Homebrew by running brew install streamingfast/tap/substreams.

Other users can get the executable from the Substreams Github repository:

# Download the correct binary for your platform (this example fetches the latest Linux build)
LINK=$(curl -s https://api.github.com/repos/streamingfast/substreams/releases/latest | awk '/download.url.*linux/ {print $2}' | sed 's/"//g')
curl -L  $LINK  | tar zxf -

Check the substreams documentation for more information about installing the CLI.

buf

To generate Rust code from protobuf definitions you need to install buf (this is not required if you only want to build and run the Substreams already available in this repository). Again, macOS users can install it with Homebrew by running brew install bufbuild/buf/buf.

Otherwise, you can get the binaries from the Github repository:

# Substitute BIN for your bin directory.
# Substitute VERSION for the current released version.
BIN="/usr/local/bin" && \
VERSION="1.9.0" && \
  curl -sSL \
      "https://github.com/bufbuild/buf/releases/download/v${VERSION}/buf-$(uname -s)-$(uname -m)" \
      -o "${BIN}/buf" && \
  chmod +x "${BIN}/buf"

For more information about the installation check the buf website.

Go

To run the available sinks from the StreamingFast team you currently need to have Go installed (until they release binaries).

macOS users can again use Homebrew to install Go by running brew install go. Linux and Windows users should follow the official installation instructions.

Make sure you have proper PATH variables set in your shell profile (for example .zshrc for zsh users):

export GOPATH=$HOME/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOPATH
export PATH=$PATH:$GOBIN

Building Substreams

To build any of the available Substreams in the ./substreams directory you can use the Makefile by running make build SUBSTREAM=<substream>. Alternatively you can change into the Substream directory and run cargo build --target wasm32-unknown-unknown --release.

You can also execute the ./build-all.sh script in case you want to build all available substreams.

Running Substreams

To execute a Substream on a server you need to use the CLI and specify the substreams.yaml file you want to execute, the endpoint to run against, and the module to execute.

For example, to execute the blocktime-meta example Substream, run:

substreams run -e waxtest.firehose.eosnation.io:9001 ./substreams/blocktime-meta/substreams.yaml store_blockmeta

If you want to execute the Substream over a specific block range, you can specify the start block using -s <start_block_num> and the end block using -t <end_block_num>. The end block can also be specified relative to the start block using +; for example, -t +1000 will run the Substream for 1000 blocks from the start block.

For a current list of available endpoints see here.

Packing Substreams

If you want to run a Substream against a sink, you first need to pack it. This creates a *.spkg bundle file.

This can be easily done by running make pack SUBSTREAM=<mysubstream>.

Running consumers & sinks

TODO describe how to run substreams from a node/go process and how to run the available sinks (file, MongoDB, graph-node)

Running sinks

There are sinks available from the StreamingFast team; currently these are the file, PostgreSQL and MongoDB sinks.

File sink

Install the sink by running go install github.com/streamingfast/substreams-sink-files/cmd/substreams-sink-files@latest.

Writing consumers

TODO describe how to write custom consumers for substreams in different languages.

Creating new Substreams

This chapter gives you a brief overview of what to do if you want to create a new Substream.

Setup the codebase

To create a new Substream you first need to create a new code base:

cargo new substreams/<mysubstream> --lib

Next, make sure the Substream can be compiled and the relevant models can be generated. For that you'll need a substreams.yaml file; copy one over from another Substream and adapt it to your needs.

Create the models

You probably want to start by defining the output models for your maps, so you can write the transformation from a full Antelope Block to the data you actually need within your Substream. These are written as protobuf definitions and should be located within the proto folder in your Substream's directory. See this protofile as an example.

After you have defined your models, generate the Rust models from your protobuf definitions by executing make codegen SUBSTREAM=<mysubstream>. This will generate the Rust code into the src/pb folder of your Substream module. You then need to add a mod.rs in that folder to export the code; see here for an example.
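
For example, a minimal src/pb/mod.rs could look like this; the generated file name depends on your protobuf package, and antelope.tokens.v1 here is only a placeholder:

src/pb/mod.rs

// re-export the prost-generated code under a friendlier module name
#[path = "antelope.tokens.v1.rs"]
pub mod tokens;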

Write the transformers and stores

To write the actual code you first want to create your maps. Maps transform an input format into some output format. The first map receives the full Antelope block format (including all block headers and all transactions). Its job is to filter out the relevant data and output it as one of the custom models you defined in the previous step.

The stores then receive the outputs from the maps and store them in one of the predefined KV stores. These provide different update policies (such as set or add) for different data types; for example, to sum up int64 values on a specific key you would use a StoreAddInt64. The available stores can be found here.
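
As a rough sketch, a store handler that counts transfers per sender could look like the following. The Actions and transfers names refer to the output model from the map example above; the Transfer.from field and the module wiring in substreams.yaml are assumptions:

use substreams::store::{StoreAdd, StoreAddInt64};

#[substreams::handlers::store]
fn store_transfer_counts(actions: Actions, store: StoreAddInt64) {
    for transfer in actions.transfers {
        // "add" update policy: increment the counter kept under the sender's account
        store.add(0, &transfer.from, 1);
    }
}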

A simple example of how to map and store blocks can be found here, and more information about developing Substreams can be found in the official documentation.

Note: Make sure the input / output data types you use in your maps and stores match the ones you have defined in your substreams.yaml file.

Contributing

TODO

Add example consumers and sinks

  • add a consumer example to retrieve data from the blocktime-meta substream
  • add an example sink for the above substream
  • add documentation on how to write custom consumers
  • add documentation how to use the existing sinks
    • flat file
    • MongoDB
    • graph-node

`Block::executed_transaction_traces()` fails on transactions without receipts

โฏ substreams run -e eos.substreams.pinax.network:443 https://github.com/pinax-network/substreams/releases/download/eosio.token-v0.13.1/eosio-token-v0.13.1.spkg map_transfers -s 35080000 -t +1
Connected (trace ID 88291f8f3327711e5f4c3122a7f6d8f6)
Progress messages received: 0 (0/sec)
Backprocessing history up to requested target block 35080000:
(hit 'm' to switch mode)


Error: rpc error: code = InvalidArgument desc = step new irr: handler step new: execute modules: applying executor results "map_transfers": execute: maps wasm call: block 35080000: module "map_transfers": general wasm execution panicked: wasm execution failed deterministically: panic in the wasm: "called `Option::unwrap()` on a `None` value" at /Users/shkvo/.cargo/registry/src/github.com-1ecc6299db9ec823/substreams-antelope-core-0.3.2/src/block.rs:36:48

Looks like the EOS system contract was out of resources for ~6000 blocks after block 35080000, and eosio::onblock was failing without receipts.

executed_transaction_traces() panics because of that.

See trx: 2e4a56fd1b06dd42c67c8b6264099d9921b627f6d4dfe6aab30f89f8f5dc22b0

Solution: skip such transactions in executed_transaction_traces() but still include them in all_transaction_traces()
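
A rough sketch of the proposed behaviour; the receipt field name is an assumption based on the issue description:

for trx in block.all_transaction_traces() {
    // a trace without a receipt was never executed, so skip it instead of unwrapping
    if trx.receipt.is_none() {
        continue;
    }
    // ... handle the executed transaction trace
}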

Rust Abigen for Antelope

Generates contract structures and decoding helper functions.
Usage:

Abigen::new("Contract", "abi/contract.abi.json")?
        .generate()?
        .write_to_file("src/abi/contract.rs")?;

Similar to what antelope-abi2rs is doing, but the contract structs and methods need to be rethought.

Add filter to `actions()`

For block.actions() it currently yields the action once per receiver (for token transfers that's 3x).

Filtering by receiver seems like a very common need, almost every single time.

It seems like by default it should filter by receiver == contract, with the unfiltered behaviour available as an opt-in.

Almost everyone will hit this: expecting one action, actually getting 3x, and then trying to figure out how to filter the duplicates out afterwards.
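
A rough sketch of filtering by receiver == contract, with field names assumed from the Antelope protobuf (ActionTrace.receiver and Action.account):

for trx in block.transaction_traces() {
    for trace in &trx.action_traces {
        // keep only the trace executed on the contract itself,
        // dropping the notification copies delivered to other receivers
        let on_contract = trace
            .action
            .as_ref()
            .map(|action| action.account == trace.receiver)
            .unwrap_or(false);
        if !on_contract {
            continue;
        }
        // ... decode and handle the action
    }
}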

`clone()` on full block/transaction_trace should be avoided

The main example showcases:

for trx in block.clone().all_transaction_traces() {
    for trace in trx.action_traces.clone() {
        action_traces.push(trace);
    }
}

Here block.clone() performs a deep clone of the full block in memory, including all of its transaction traces, which is highly undesirable. Each action trace is then cloned again.

A pure view function should be used, or into_iter() can be used to "move" transactions and actions out of the array, moving them instead of copying or referencing.
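
A rough sketch of that approach, assuming the trace vector can be consumed by value (unfiltered_transaction_traces is an assumed field name on the Block protobuf):

let action_traces: Vec<_> = block
    .unfiltered_transaction_traces
    .into_iter()
    .flat_map(|trx| trx.action_traces)
    .collect();

This moves every trace out of the block instead of deep-cloning the block and each trace.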

I have not had the chance to review the Rust code, but I'll happily guide/help anyone who does.

Add convenience filtering methods to `Block`

Lowish priority.

Similarly to events() in substreams_ethereum, it would be good to have corresponding methods for Antelope, so this is possible:

  • block.action_calls::<abi::token::actions::Transfer>("eosio.token") - produces iterator with decoded Transfer struct on all eosio.token::transfer calls in the block

  • block.table_changes::<abi::token::tables::Account>("eosio.token", "accounts") - produces iterator with decoded before/after Account structs on all eosio.token::accounts table changes in the block

Of course, the generated ABI module (abi::token in this case) should implement decode() methods on those structs for that to work.

Split into crates

See substreams-ethereum:

  • substreams-antelope-core
  • substreams-antelope-abigen
  • substreams-antelope
  • substreams-antelope-abigen-tests
