regen-network / regen-ledger

:seedling: Blockchain for planetary regeneration

Home Page: https://docs.regen.network

License: Other

Go 75.82% Makefile 1.02% Shell 1.50% Dockerfile 0.06% Gherkin 21.59%
blockchain cosmos-sdk tendermint carbon biodiversity origination marketplace climate-tech credit

regen-ledger's Introduction

A distributed ledger for ecological assets and data claims
Introduction

Regen Ledger is a blockchain application for ecological assets and data claims built on top of Cosmos SDK and Tendermint Core. Leveraging these tools, Regen Ledger provides the infrastructure for a Proof-of-Stake blockchain network governed by a community dedicated to planetary regeneration.

Features specific to Regen Ledger are developed within this repository as custom modules that are then wired up to the main application. The custom modules developed within Regen Ledger follow the same architecture and pattern as modules developed within Cosmos SDK and other Cosmos SDK applications.

The core features that Regen Ledger aims to provide include the following:

  • infrastructure for managing the issuance and retirement of ecosystem service credits
  • a database of ecological state and change of state claims that spans both on and off-chain data sources
  • mechanisms for automating the assessment of ecological state, making payments, and issuing assets

Regen Ledger is under heavy development and, as a result, the above features are implemented to varying degrees of completeness. For more information about our approach and vision, see the Regen Ledger Specification.

Documentation

Documentation for Regen Ledger is hosted at docs.regen.network. This includes installation instructions for users and developers, information about live networks running Regen Ledger, instructions on how to interact with local and live networks, infrastructure and module-specific documentation, tutorials for users and developers, migration guides for developers, upgrade guides for validators, a complete list of available commands, and more.

Contributing

Contributions are more than welcome and greatly appreciated. All the information you need to get started should be available in Contributing Guidelines. Please take the time to read through the contributing guidelines before opening an issue or pull request. The following prerequisites and commands cover the basics.

Prerequisites

Go Tools

Install go tools:

make tools

Git Hooks

Configure git hooks:

git config core.hooksPath scripts/githooks

Lint and Format

Run linter in all go modules:

make lint

Run linter and attempt to fix errors in all go modules:

make lint-fix

Run formatting in all go modules:

make format

Run linter for all proto files:

make proto-lint

Run linter and attempt to fix errors for all proto files:

make proto-lint-fix

Run formatting for all proto files:

make proto-format

Running Tests

Run all unit and integrations tests:

make test

Manual Testing

Build the regen binary:

make build

View the available commands:

./build/regen help

Related Repositories

Sleeping in the Forest

I thought the earth remembered me,
she took me back so tenderly,
arranging her dark skirts, her pockets
full of lichens and seeds.

I slept as never before, a stone on the river bed,
nothing between me and the white fire of the stars
but my thoughts, and they floated light as moths
among the branches of the perfect trees.

All night I heard the small kingdoms
breathing around me, the insects,
and the birds who do their work in the darkness.

All night I rose and fell, as if in water,
grappling with a luminous doom. By morning
I had vanished at least a dozen times
into something better.

― Mary Oliver

regen-ledger's People

Contributors

aaronc, aleem1314, amaury1093, anilcse, atheeshp, blushi, clevinson, colin-axner, cpolitano, cshear, cyberbono3, dependabot[bot], dpdanpittman, faddat, frumioj, gotjoshua, kaustubhkapatral, likhita-809, mergify[bot], omahs, paul121, petefarmer, rickmanelius, robert-zaremba, ruhatch, ryanchristo, sfishel18, stefanvanburen, technicallyty, wgwz


regen-ledger's Issues

xrncli q account <addr> textual output doesn't show balance

Describe the bug
When querying account balances with xrncli q account <addr> the default (text) mode fails to show the balance in the account:

To Reproduce
Steps to reproduce the behavior:
Using version 0.4.1:

  1. xrncli q account xrn:140y8m6r7s40mvmz6g5dqrsrfvfkq5m8cu30tnv

Returns:

$ xrncli q account xrn:140y8m6r7s40mvmz6g5dqrsrfvfkq5m8cu30tnv
address: xrn:140y8m6r7s40mvmz6g5dqrsrfvfkq5m8cu30tnv
coins:
- denom: seed
  amount: {}
pubkey:
- 2
- 76
- 145
- 39
- 14
- 136
- 244
- 214
- 118
- 38
- 18
- 226
- 137
- 252
- 241
- 161
- 43
- 7
- 38
- 94
- 202
- 129
- 143
- 35
- 30
- 110
- 11
- 248
- 195
- 168
- 55
- 206
- 215
accountnumber: 1078
sequence: 14

Note:

coins:
- denom: seed
  amount: {}

Expected behavior
The same command with --output=json shows the true balance:

$ xrncli q account xrn:140y8m6r7s40mvmz6g5dqrsrfvfkq5m8cu30tnv --output=json
{"type":"auth/Account","value":{"address":"xrn:140y8m6r7s40mvmz6g5dqrsrfvfkq5m8cu30tnv","coins":[{"denom":"seed","amount":"1000000822"}],"public_key":{"type":"tendermint/PubKeySecp256k1","value":"AkyRJw6I9NZ2JhLiifzxoSsHJl7KgY8jHm4L+MOoN87X"},"account_number":"1078","sequence":"14"}}

Note:

"coins": [{"denom":"seed","amount":"1000000822"}]

Make all fails with missing script

xrnd version :
v0.5.0

When trying to run make all, it looks like install golangci-lint.sh is missing in the latest version (v0.5.0), causing the make execution to stop

Write CONTRIBUTING.md

  • link to Waffle/Stories on Board, describe methodology - i.e. story mapping, agile/pull methodology
  • link to Gitter channel
  • describe documentation & testing workflows
  • describe working group/design workflows
  • configure Github PR reviewers/CODEOWNERS and issue templates
  • mention "help wanted" and "good first issue" labels and mark some issues as such
  • add code of conduct

Support basic off-chain raw data tracking

The minimum viable functionality for this is to let users store a SHA-256 hash with an associated URL where the data can be retrieved. SHA-256 was chosen because of its ubiquity and its usage in IPFS, which may be a commonly used data store.

We probably want a message like this:

type MsgTrackRawData struct {
  Sha256Hash []byte
  Url string
  Signer sdk.AccAddress
}

AC:

  • tx tags should include the DataAddress, which includes a new case for Sha256 hashes and doesn't otherwise include anything about the URL
  • it is not an error to submit multiple URLs for the same hash. We are essentially letting someone claim that the data exists at such and such URL. There is nothing wrong with submitting multiple URLs, and oracles should deal with 404s or hash errors gracefully behind the scenes
  • it is also not an error for Url to be nil, because it should be possible to track (i.e. timestamp) data without providing a URL
  • there should be an associated QueryStore method (not a custom/ route) to get a list of URLs registered for the hash. This involves exposing a function for generating the store key for these URLs from the DataAddress
  • there should be an xrncli tx data track-raw command that takes a hash and an optional URL
  • there should be an xrncli query data get <bech-address> command and a /data/<bech-address> REST route that work for any DataAddress and return the relevant info
  • we should check that the length of the hash is valid for SHA-256 and reject it if not
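A minimal validity check for the last acceptance criterion might look like the following Go sketch; the function and error names are illustrative, not from the actual module:

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// errInvalidHashLen is an illustrative error for a hash that is not a
// valid SHA-256 digest.
var errInvalidHashLen = errors.New("hash must be exactly 32 bytes (SHA-256)")

// validateRawDataHash checks the one property the acceptance criteria
// call out: the hash length must match a SHA-256 digest. A nil URL is
// allowed, so only the hash is validated here.
func validateRawDataHash(hash []byte) error {
	if len(hash) != sha256.Size {
		return errInvalidHashLen
	}
	return nil
}

func main() {
	good := sha256.Sum256([]byte("some off-chain data"))
	fmt.Println(validateRawDataHash(good[:]) == nil)          // true
	fmt.Println(validateRawDataHash([]byte{1, 2, 3}) == nil)  // false
}
```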

Create merkle-tree based RDF canonicalization spec

Context

Most commonly used data formats like JSON or protobuf allow multiple binary representations of the same data. For instance, JSON and protobuf fields can occur in any order, and JSON allows any amount of whitespace.

Canonicalization refers to a process by which a single binary representation is created for any semantically equivalent document.

URDNA2015 defines such a canonicalization algorithm for the RDF data model.

Proposal

My design is to do something both simpler and more powerful than URDNA2015 using this approach:

  • start with an acyclic RDF graph (or dataset) consisting only of blank nodes
  • replace blank nodes with IRIs by depth-first traversal using this algorithm:
    • create an empty IAVL merkle tree (or set)
    • take the list of triples (or quads) with this node as its subject
    • map each triple onto a string which is the canonical string serialization of that triple's predicate and object concatenated together
    • sort this list
    • insert each item in the list as a key in the IAVL merkle tree (with an empty value)
    • the resulting universally unique IRI for this node is xrn:g/<hash> where <hash> is the hash of this merkle tree (will the blake2b-256 hash algorithm work?)

What this approach allows is:

  • universally unique, content addressable nodes
  • graphs where a subset of the triples can be revealed and their membership in the graph can be proved via a merkle proof
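The node-addressing step of the proposal can be sketched as follows; SHA-256 and a flat hash over the sorted list stand in for the blake2b-256 and IAVL merkle tree named above, so this shows the shape of the algorithm rather than the actual construction:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// triple is a minimal stand-in for one RDF triple whose subject is the
// blank node being addressed.
type triple struct {
	predicate, object string
}

// canonicalIRI sketches the proposal: serialize each (predicate, object)
// pair, sort the serializations, hash the sorted list, and derive an
// xrn:g/<hash> IRI for the node.
func canonicalIRI(triples []triple) string {
	keys := make([]string, 0, len(triples))
	for _, t := range triples {
		keys = append(keys, t.predicate+" "+t.object)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
	}
	return fmt.Sprintf("xrn:g/%x", h.Sum(nil))
}

func main() {
	a := []triple{{"ex:height", "10"}, {"ex:species", "oak"}}
	b := []triple{{"ex:species", "oak"}, {"ex:height", "10"}}
	// Order-independent: both serializations yield the same IRI.
	fmt.Println(canonicalIRI(a) == canonicalIRI(b))
}
```

The key property — two orderings of the same triples produce the same IRI — is what makes the resulting identifier content-addressable.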

Consequences

References

Create claim module

This is to replace, or at least complement, the current esp module with something more generic, offloading the responsibility of deciding whether a claim is valid to other logic.

Use Cases and Rationale:
Use Cases and Rationale:
The claims model is a generic way to represent who said what and why. The what is contained in the data graph (the claim itself), and the who is managed by the claims module, which allows many entities to assert the same claim. Claims are often signed by trusted third parties, so what creates the audit trail if it is not contained in the claim itself? This is where evidence fits in: oftentimes there is off-chain data, even just scanned paper documents, that represents what the claim means, or somebody has done a thorough investigation and produced a report. These can all be linked to a claim via evidence, which is the "why the claim is true"; the claim signature is effectively the who.

Claims themselves will now be stored in the data module, which will eventually abstract "graph descriptors" that include the graph schema and subjects (i.e. GeoAddresses or sdk.AccAddresses). The claim module just includes a very basic apparatus for signing claims.

A claim signature includes five basic pieces of data:

  • the claim itself (a DataAddress)
  • the signer(s) ([]sdk.AccAddress)
  • any supporting evidence ([]DataAddress) - on or off chain graphs, and importantly off-chain raw data which could be photographs, PDFs, etc.
  • event/claim time - the time which the ecological claim refers to e.g. when a farmer received organic certification.
  • blockchain time - the time the claim signature was recorded on (and off) chain

One important piece of behavior:
Given an existing claim that is already signed
When somebody tries to sign the same claim
Then new evidence and/or signatures will be recorded if any

Fix Regen Ledger lint errors and add `golint` CI task

We want to be able to run golint ./... in CI. This mostly checks that all modules have appropriate doc strings. So this issue is mostly about adding missing source code documentation.

Definition of Done:

  • you can run golint ./... from the project root and there are no errors
  • there is a new CI stage that runs golint ./.... We should probably do this with https://github.com/golangci/golangci-lint instead of golint directly

ESP version CLI query support

When a user types in xrncli query esp version <curator-id> <name> <version>
Then they should get the ESPSpec back if such a version exists

Research: Map out Rhythm

Acceptance Criteria

  • image that communicates our trajectory—where we are today and where we're headed
  • list of activities we can draw from
  • share out more widely
  • include pov of open source, cross domains, multiple teams, commitments etc

MURAL

ESP result CLI query support

Given an ESP result ID
When a user types in xrncli esp query result <result-id>
Then they should get back the ESP result data

On-chain graph data CLI and REST query method

For xrncli query data get and /data/<bech32-url-encoded>

Blocked by #21 (needs an off-chain SchemaResolver)

For the scope of this ticket it should be possible to retrieve data in binary format at a minimum. It would also be desirable to return it in JSON-LD with proper support for framing but that will be addressed in another ticket.

Property schemas

As an ESP author I want to be able to have some schema for my ESP claims so that all data that is submitted as a claim conforms to some format.

This issue is intended to define some minimum level of schema support by supporting RDF property schemas only.

  • it should be possible to define the required type for a property. Initially we will only support data properties of type string, double, integer (bigint) and bool
  • it should be possible to define the arity of a property as either one or many
  • property URLs are formed from the sdk.AccAddress of the creator plus a string identifier; the resulting URL will be of the form xrn:xxxxxxxxx/s/<property-name> where xrn:xxxxxxxxx is the normal bech32 encoding of an sdk.AccAddress
  • internally all properties will receive a PropertyID which is an auto-incremented integer that can be used by an efficient binary serialization format to reference properties (#18)
  • wrote tests
  • wrote docs
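A minimal sketch of the URL-formation rule described above; the bech32 encoding of the creator address is elided, and the xrn:xxxxxxxxx placeholder is taken from the bullet rather than a real address:

```go
package main

import "fmt"

// PropertyType enumerates the data property types the issue scopes in.
type PropertyType int

const (
	TyString PropertyType = iota
	TyDouble
	TyInteger // bigint
	TyBool
)

// propertyURL forms the xrn:<creator>/s/<name> identifier described in
// the issue. creatorBech32 stands in for the bech32-encoded
// sdk.AccAddress; the real encoding is elided here.
func propertyURL(creatorBech32, name string) string {
	return fmt.Sprintf("%s/s/%s", creatorBech32, name)
}

func main() {
	fmt.Println(propertyURL("xrn:xxxxxxxxx", "canopy-cover"))
	// → xrn:xxxxxxxxx/s/canopy-cover
}
```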

add version info to xrnd

Currently 'xrnd version' only outputs the Go version. The other fields are blank and need to be populated automatically.
Also, it would be nice to see the version in the Usage message.

Research Story: Walkers Pilot, Ecocacao Schema Draft

Definition of Done

  • support for the Walker's & Ecocacao data schema
  • work with Gisel
    • acquire walker's data
    • understand which elements of the schema are universal (may apply to other esp's)
  • walker data stored and accessible to all (possibly a Github repo or pushed to an existing repo)
  • requirements and architecture document
    • review and understand what changes to schema module are anticipated
    • class primitives

Questions

  • confirm with Gregory that Walker's data can be public (e.g. store on public testnet, in our app; where are the boundaries such as public info?)
  • provide a link to sentinel / planet imagery in claim evidence?
  • will the map viewer allow navigating the link above?

Clarification here: https://docs.google.com/document/d/1dEjc7jnO2MgsuMaz4lSpj0VxNXtLuYk3KVX36Qu2ZII

Improvements to upgrade module

Motivation: cosmos sometimes pushes breaking changes to the genesis config that would force us to recreate our testnet. We shouldn't be lazy and tear down our testnet every time something like this happens and we actually need a more graceful way to deal with it in production.

  • the upgrade keeper should allow for some "on upgrade" code to be called whenever a chain restarts after a scheduled upgrade. This upgrade code can perform state changes necessary to support the upgrade before any other transactions are processed (so it should happen in the BeginBlocker)
  • allow upgrades based on time or block height
  • make the module as simple as possible but not simpler
  • thorough docs, run golint and make sure there are tutorial docs
  • thorough test coverage
  • adapt Postgres indexer package to properly support migrations and integrate them with upgrade module for testnet
  • add cli query and REST support, and possibly a cli tx cmd helper

`make lint` fails

Describe the bug

I tried running make all and got this issue

Expected behavior

I will propose a PR in the future to fix linter errors

Setup integration test framework and write a simple integration test

  • Using https://github.com/cosmos/cosmos-sdk/tree/develop/cmd/gaia/cli_test as an example, create a cmd/integration_test folder for Regen Ledger, pull out all the gaia-specific stuff, and get it set up for xrnd, xrncli, and a Postgres index
  • Create a make integration_test command
  • Create two accounts with some xrn
  • Test sending money from one account to another using xrncli and verify that the balances are correct
  • Verify that the send transaction and its block are indexed correctly in Postgres
  • Test creating and retrieving a geo shape with xrncli
  • Verify that the created shape can also be retrieved from the geo table in Postgres
  • Add an integration test stage to GitLab CI with a Postgres docker service
  • Update the README to reflect any difference from how Cosmos does things

Indexing of on-chain graphs to PostgreSQL

This is to support the generation of maps for visualizing ecological state claims.

On-chain graph data should be indexed to PostgreSQL in the data table, which should look something like this:

CREATE TABLE data (
  address bytea NOT NULL PRIMARY KEY,
  data jsonb -- for on-chain graphs
);

The graph should be serialized as JSON-LD in expanded document form.

Refine README and linked docs

The README should provide a clear description of what Regen Ledger does and point interested people to the right documents. These linked documents should provide clear detail on the current state of the software and indicate clearly the points at which people can interact with the software and/or the project.

Change PowerReduction for testnet

Currently it is set to 1000000, which causes 1000000tree tokens to actually have a voting power of 1. We want to change this temporarily for regen-test-1001, then set it to a higher value and simply issue more tokens for the next testnet and mainnet.
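The arithmetic behind the issue, as a small Go sketch: staked tokens are integer-divided by PowerReduction, so any balance below the reduction rounds down to zero voting power (values here are illustrative):

```go
package main

import "fmt"

// votingPower integer-divides staked tokens by the PowerReduction
// constant, mirroring how voting power is derived from stake.
func votingPower(tokens, powerReduction int64) int64 {
	return tokens / powerReduction
}

func main() {
	const powerReduction = 1000000
	fmt.Println(votingPower(1000000, powerReduction)) // 1
	fmt.Println(votingPower(999999, powerReduction))  // 0
}
```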

Mainnet Launch Readiness

These are the minimum things required for launch, they are not the minimum desired features (as we would like to launch with much more) but the minimum required features.

These will allow us to:

  • have an upgradeable blockchain that doesn't need to dump state/restart
  • support community staking DAOs

The logic behind this is that we can launch sooner with a minimal feature set if we have an upgradeable chain, and then slowly add features as they are ready rather than delaying until we have everything that we want.

Non-destructive blockchain upgrades

Acceptance Criteria

  • governance can coordinate a smooth chain upgrade at a specified block height without requiring chain dumps
  • upgrade daemon for smoothly handling upgrades
  • smooth upgrades can handle root multi store renames/deletions
  • support for recovering from upgrades that need to be aborted
  • backwards compatible upgrades for Tendermint (discussed with Ethan Buchman in Berlin and sounded feasible)

Spec and basic types for minimum viable oracle module

The goal of the oracle module is to implement consensus protocols via which we can verify a claim that the computation F with input A has result B.

  • Should specify both the minimum viable on-chain and off-chain data structures for computation results
  • Should specify the mechanics of the minimum viable oracle consensus protocol(s). We may specify one called "fiat" which simply says that an oracle with address X can verify any computations it likes
  • It would be good to expand on a more complete oracle consensus protocol and reach an initial draft, but it is okay to mark it as WIP
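The "fiat" protocol mentioned above could reduce to something like the following sketch; computationClaim and fiatOracle are illustrative types, not from any spec:

```go
package main

import "fmt"

// computationClaim is an illustrative on-chain record asserting that
// computation F with input A has result B; fields hold content addresses.
type computationClaim struct {
	F, A, B string
}

// fiatOracle sketches the "fiat" protocol: an oracle with a designated
// address may verify any computation it likes, so the only check is that
// the verifier is that address.
type fiatOracle struct {
	trusted string
}

func (o fiatOracle) verify(verifier string, _ computationClaim) bool {
	return verifier == o.trusted
}

func main() {
	o := fiatOracle{trusted: "xrn:oracle1"}
	claim := computationClaim{F: "xrn:d/fn", A: "xrn:d/in", B: "xrn:d/out"}
	fmt.Println(o.verify("xrn:oracle1", claim))     // true
	fmt.Println(o.verify("xrn:someoneelse", claim)) // false
}
```

A fuller consensus protocol would replace the single-address check with quorum or stake-weighted verification over the same claim structure.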

PostgREST & PostGraphile servers for PostgreSQL blockchain index

This is primarily a devops task and would involve configuring PostgREST and PostGraphile instances in module.nix for the PostgreSQL blockchain index. These should be enabled whenever the index is enabled and only deal with authenticating a public, read-only user, because the whole index is available for anyone to query and is read-only. Should be pretty straightforward; no need to deal with any of the JWT authentication stuff for these services.

Also worth considering an integration of something like https://github.com/go-spatial/tegola or https://github.com/urbica/martin for generating Mapbox Vector Tiles from PostGIS, or tracking that for later

Property schema CLI tx method

It should be possible to define a property schema using the command xrncli tx schema define-property <name> <arity> <type> --from <creator>

Efficient binary RDF format

In order to store on-chain RDF data, it makes most sense to have an efficient binary format for storing that data that relates to the schema module (which defines global schemas for RDF data). This format will:

  • enable efficient verification that the data conforms to the global RDF schema
  • enable efficient verification of the graph hash
  • save storage space on-chain

Should:

  • implement the format only for string node names, and data properties for properties that have been registered in the schema (referencing their PropertyID from the schema module #17)
  • the serializer should write out nodes and properties in normalized form (i.e. alphabetical, no blank nodes), return a "normalized" graph instance to the caller, and return the graph hash
  • the deserializer should verify the graph has been serialized in normalized form and return the computed graph hash
  • write thorough tests, including generative tests
  • write thorough docs including grammar of format
  • add CHANGELOG entry

DEV NOTES:
the grammar for the data should be roughly as follows:

File = FileVersion Node*
FileVersion = <varint encoding of file format version>
Node = NodeID Property*
NodeID = 0x0 <node-name-string>
Property = PropertyID PropertyValue
PropertyID = 0x0 <integer property id from schema module>
PropertyValue = <binary encoding of property value based on schema type>
  • a special "un-named" root node is allowed in every graph
  • the classes for a node (currently unsupported) should be serialized at the start of every node

Store on-chain data graph

Feature: Store on-chain data graph
As a Regen Ledger user
I want to be able to store an RDF data graph
Because I want this data indexed in the global claim database

Given an RDF data graph
When I submit a valid MsgStoreData transaction
Then I should get back the URL for the on-chain data graph

  • wrote docs
  • wrote tests

DEFINITIONS and RATIONALE:
RDF data graph - a format for data graphs that allows indexing in a data store which can use the SPARQL query language to query over it. This is advantageous because SPARQL has the right features to make it usable as a language for contracts: flexible query expressions over heterogeneous data, logic-programming based, and the ability to query external data stores in a single query (SPARQL SERVICE)
Global claim database - Regen Ledger provides a global claim database. The data for a claim is represented as an RDF data graph (see W3C Verifiable Claims for some prior work this design is based on), so these on-chain data graphs form the basis of the global claim database. The database would also include signatures/verifications of claims, but that is covered in other modules

DEV NOTES:

  • MsgStoreData should include both the data in n-triples format and the hash of the data. The hash should be similar to URDNA2015, but for performance we'll use Blake2b-256 and make blank nodes illegal so we just need to verify that the triples are alphabetized.
  • The data should be identified internally by a data.Address which is the hash of the data plus a one-byte prefix for the type of hash/data storage
  • The URL should be the bech32 encoding of the data.Address with the prefix xrn:d/
  • The goal is to index in a Sparql data store but that will be covered in another issue
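The dev notes above suggest a data.Address of the form one prefix byte plus a content hash; a hedged sketch, with SHA-256 standing in for the Blake2b-256 named in the notes (which needs golang.org/x/crypto) and 0x01 as an assumed prefix:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// dataAddress sketches the dev notes: the address is a one-byte prefix
// identifying the hash/storage type, followed by the hash of the
// alphabetized n-triples. SHA-256 and the 0x01 prefix are stand-ins;
// the URL would then be the bech32 encoding of this address with an
// xrn:d/ prefix (encoding elided here).
func dataAddress(canonicalNTriples []byte) []byte {
	h := sha256.Sum256(canonicalNTriples)
	return append([]byte{0x01}, h[:]...)
}

func main() {
	addr := dataAddress([]byte("<ex:s> <ex:p> \"o\" .\n"))
	fmt.Println(len(addr)) // 33: 1 prefix byte + 32 hash bytes
}
```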

Submit on-chain graph data cli support

This is the ability to store graph data on-chain via the cli. There should be a command xrncli tx data store that takes JSON-LD as input and validates and stores it on the blockchain.
