
near-workspaces-rs's Introduction

NEAR Workspaces (Rust Edition)

Rust library for automating workflows and writing tests for NEAR smart contracts. This software is not final, and will likely change.


Release notes

Release notes and unreleased changes can be found in the CHANGELOG

Requirements

  • Rust v1.69.0 and up.
  • macOS (x86 and M1) or Linux (x86) for sandbox tests.

WASM compilation not supported

near-workspaces-rs, the library itself, does not currently compile to WASM. If you are using it alongside something that already compiles to WASM, such as near-sdk-rs, it is best to put this dependency in the [dev-dependencies] section of Cargo.toml.
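For instance, a minimal [dev-dependencies] setup might look like the following (the crate versions here are illustrative placeholders, not pinned recommendations):

[dev-dependencies]
near-workspaces = "0.9"                          # the library under discussion
tokio = { version = "1", features = ["full"] }   # async runtime for #[tokio::test]
serde_json = "1"                                 # for the json! macro used below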

Simple Testing Case

A simple test to get us going and familiar with the near-workspaces framework. Here, we will go through the NFT contract and how we can test it with near-workspaces-rs.

Setup -- Imports

First, we need to declare some imports for convenience.

// macro allowing us to convert args into JSON bytes to be read by the contract.
use serde_json::json;

We will need to have our pre-compiled WASM contract ahead of time and know its path. Refer to the respective near-sdk-{rs, js} repo for where these files are located.

In this showcase, we will be pointing to the example's NFT contract:

const NFT_WASM_FILEPATH: &str = "./examples/res/non_fungible_token.wasm";

NOTE: there is an unstable feature that will allow us to compile our projects during test time as well. Take a look at the feature section Compiling Contracts During Test Time.

Setup -- Setting up Sandbox and Deploying NFT Contract

This includes launching our sandbox, loading our wasm file and deploying that wasm file to the sandbox environment.

#[tokio::test]
async fn test_nft_contract() -> anyhow::Result<()> {
    let worker = near_workspaces::sandbox().await?;
    let wasm = std::fs::read(NFT_WASM_FILEPATH)?;
    let contract = worker.dev_deploy(&wasm).await?;

Where

  • anyhow - A crate that deals with error handling, making it more robust for developers.
  • worker - Our gateway towards interacting with our sandbox environment.
  • contract - The deployed contract on the sandbox that the developer interacts with.

Initialize Contract & Test Output

Then we'll go directly into making a call into the contract, and initialize the NFT contract's metadata:

    let outcome = contract
        .call("new_default_meta")
        .args_json(json!({
            "owner_id": contract.id(),
        }))
        .transact()  // note: we use the contract's keys here to sign the transaction
        .await?;

    // outcome contains data like logs, receipts and transaction outcomes.
    println!("new_default_meta outcome: {:#?}", outcome);

Afterwards, let's mint an NFT via nft_mint. This showcases some extra arguments we can supply, such as deposit and gas:

    use near_gas::NearGas;
    use near_workspaces::types::NearToken;

    let deposit = NearToken::from_near(100);
    let outcome = contract
        .call("nft_mint")
        .args_json(json!({
            "token_id": "0",
            "token_owner_id": contract.id(),
            "token_metadata": {
                "title": "Olympus Mons",
                "description": "Tallest mountain in charted solar system",
                "copies": 1,
            },
        }))
        .deposit(deposit)
        // nft_mint might consume more than default gas, so supply our own gas value:
        .gas(NearGas::from_tgas(300))
        .transact()
        .await?;

    println!("nft_mint outcome: {:#?}", outcome);

Then later on, we can view our minted NFT's metadata via our view call into nft_metadata:

    let result: serde_json::Value = contract
        .call("nft_metadata")
        .view()
        .await?
        .json()?;

    println!("--------------\n{}", result);
    println!("Dev Account ID: {}", contract.id());
    Ok(())
}

Updating Contract Afterwards

Note that if our contract code changes, near-workspaces-rs does nothing about it since we are utilizing deploy/dev_deploy to merely send the contract bytes to the network. So if it does change, we will have to recompile the contract as usual, and point deploy/dev_deploy again to the right WASM files. However, there is a feature that will recompile contract changes for us: refer to the experimental/unstable compile_project function for telling near-workspaces to compile a Rust project for us.

Examples

More standalone examples can be found in examples/src/*.rs.

To run the above NFT example, execute:

cargo run --example nft

Features

Choosing a network

#[tokio::main]  // or whatever runtime we want
async fn main() -> anyhow::Result<()> {
    // Create a sandboxed environment.
    // NOTE: Each call will create a new sandboxed environment
    let worker = near_workspaces::sandbox().await?;
    // or for testnet:
    let worker = near_workspaces::testnet().await?;
    Ok(())
}
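Besides sandbox and testnet, there are workers for other networks as well; a brief sketch (these lines would replace the worker line above — note that the mainnet worker is read-only rather than a dev environment):

    // or for mainnet (read-only; no dev accounts here):
    let worker = near_workspaces::mainnet().await?;
    // or an archival node, for querying historical state:
    let worker = near_workspaces::testnet_archival().await?;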

Helper Functions

Need to make a helper function utilizing contracts? Just import it and pass it around:

use near_workspaces::Contract;

// Helper function that calls into a contract we give it
async fn call_my_func(contract: &Contract, msg: &str) -> anyhow::Result<()> {
    // Call into the function `contract_function` with args:
    contract.call("contract_function")
        .args_json(serde_json::json!({
            "message": msg,
        }))
        .transact()
        .await?;
    Ok(())
}

Or to pass around workers regardless of networks:

use near_workspaces::{DevNetwork, Worker};

const CONTRACT_BYTES: &[u8] = include_bytes!("./relative/path/to/file.wasm");

// Create a helper function that deploys a specific contract
// NOTE: `dev_deploy` is only available on `DevNetwork`s such as sandbox and testnet.
async fn deploy_my_contract(worker: Worker<impl DevNetwork>) -> anyhow::Result<Contract> {
    Ok(worker.dev_deploy(CONTRACT_BYTES).await?)
}
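A test can then reuse this helper on any DevNetwork-backed worker. A minimal sketch of calling it from a sandbox test (the test name here is illustrative):

#[tokio::test]
async fn test_with_helper() -> anyhow::Result<()> {
    let worker = near_workspaces::sandbox().await?;
    let contract = deploy_my_contract(worker).await?;
    // ... exercise `contract` as in the earlier examples
    Ok(())
}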

View Account Details

We can check the balance of our accounts like so:

#[tokio::test]
async fn test_contract_transfer() -> anyhow::Result<()> {
    let transfer_amount = NearToken::from_millinear(100);
    let worker = near_workspaces::sandbox().await?;

    let contract = worker
        .dev_deploy(include_bytes!("../target/res/your_project_name.wasm"))
        .await?;
    contract.call("new")
        .max_gas()
        .transact()
        .await?;

    let alice = worker.dev_create_account().await?;
    let bob = worker.dev_create_account().await?;
    let bob_original_balance = bob.view_account().await?.balance;

    alice.call(contract.id(), "function_that_transfers")
        .args_json(json!({ "destination_account": bob.id() }))
        .max_gas()
        .deposit(transfer_amount)
        .transact()
        .await?;
    assert_eq!(
        bob.view_account().await?.balance,
        bob_original_balance.saturating_add(transfer_amount)
    );

    Ok(())
}

For viewing other chain-related details, look at the docs for Worker, Account and Contract.

Spooning - Pulling Existing State and Contracts from Mainnet/Testnet

This example will showcase spooning state from a testnet contract into our local sandbox environment.

We will first start with the usual imports:

use near_workspaces::network::Sandbox;
use near_workspaces::{Account, AccountId, BlockHeight, Contract, Worker};

Then specify the contract name from testnet we want to be pulling:

const CONTRACT_ACCOUNT: &str = "contract_account_name_on_testnet.testnet";

Let's also specify a specific block height referencing back to a specific point in time, just in case our contract or the one we're referencing has been changed or updated:

const BLOCK_HEIGHT: BlockHeight = 12345;

Create a function called pull_contract which will pull the contract's .wasm file from the chain and deploy it onto our local sandbox. We'll have to re-initialize it with all the data to run tests.

async fn pull_contract(owner: &Account, worker: &Worker<Sandbox>) -> anyhow::Result<Contract> {
    let testnet = near_workspaces::testnet_archival().await?;
    let contract_id: AccountId = CONTRACT_ACCOUNT.parse()?;

This next line will actually pull down the relevant contract from testnet and set an initial balance on it with 1000 NEAR.

Following that, we will have to init the contract again with our own metadata. This is because the contract's data is too big for the RPC service to pull down (its limit is set to 50kb).

    use near_workspaces::types::NearToken;
    let contract = worker
        .import_contract(&contract_id, &testnet)
        .initial_balance(NearToken::from_near(1000))
        .block_height(BLOCK_HEIGHT)
        .transact()
        .await?;

    owner
        .call(contract.id(), "init_method_name")
        .args_json(serde_json::json!({
            "arg1": value1,
            "arg2": value2,
        }))
        .transact()
        .await?;

    Ok(contract)
}
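A test could then build on pull_contract to exercise the spooned state locally. A rough sketch, where the view method name is a placeholder for whatever the imported contract exposes:

#[tokio::test]
async fn test_spooned_contract() -> anyhow::Result<()> {
    let worker = near_workspaces::sandbox().await?;
    let owner = worker.dev_create_account().await?;
    let contract = pull_contract(&owner, &worker).await?;

    // Query the re-initialized contract as if it lived on sandbox all along:
    let result: serde_json::Value = contract
        .call("some_view_method")  // placeholder method name
        .view()
        .await?
        .json()?;
    println!("spooned contract state: {}", result);
    Ok(())
}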

Time Traveling

workspaces testing offers support for forwarding the state of the blockchain to the future. This means contracts which require time-sensitive data do not need to sit and wait the same amount of time for blocks on the sandbox to be produced. We can simply call worker.fast_forward to get us further in time. Note: This is not to be confused with speeding up the current in-flight transactions; the state being forwarded in this case refers to time-related state (the block height, timestamp and epoch).

#[tokio::test]
async fn test_contract() -> anyhow::Result<()> {
    let worker = near_workspaces::sandbox().await?;
    let contract = worker.dev_deploy(WASM_BYTES).await?;

    let blocks_to_advance = 10000;
    worker.fast_forward(blocks_to_advance).await?;

    // Now, "do_something_with_time" will be in the future and can act on future time-related state.
    contract.call("do_something_with_time")
        .transact()
        .await?;

    Ok(())
}

For a full example, take a look at examples/src/fast_forward.rs.

Compiling Contracts During Test Time

Note, this is an unstable feature and will very likely change. To enable it, add the unstable feature flag to workspaces dependency in Cargo.toml:

[dependencies]
near-workspaces = { version = "...", features = ["unstable"] }

Then, in our tests right before we call into deploy or dev_deploy, we can compile our projects:

#[tokio::test]
async fn test_contract() -> anyhow::Result<()> {
    let wasm = near_workspaces::compile_project("path/to/contract-rs-project").await?;

    let worker = near_workspaces::sandbox().await?;
    let contract = worker.dev_deploy(&wasm).await?;
    ...
}

For a full example, take a look at workspaces/tests/deploy_project.rs.

Other Features

Other features can be directly found in the examples/ folder, with some documentation outlining how they can be used.

Environment Variables

These environment variables will be useful if you ever hit a snag:

  • NEAR_RPC_TIMEOUT_SECS: The amount of time to wait before timing out on an RPC request to the sandbox or any other network such as testnet. The default is 10 seconds.
  • NEAR_SANDBOX_BIN_PATH: Set this to our own prebuilt neard-sandbox bin path if we want to use a non-default version of the sandbox or configure nearcore with our own custom features that we want to test in near-workspaces.
  • NEAR_SANDBOX_MAX_PAYLOAD_SIZE: Sets the max payload size for sending transaction commits to sandbox. The default is 1 GB, which is necessary for patching large states.
  • NEAR_SANDBOX_MAX_FILES: Set the max amount of files that can be opened at a time in the sandbox. If none is specified, the default size of 4096 will be used. The actual near chain will use over 10,000 in practice, but for testing this should be much lower since we do not have a constantly running blockchain unless our tests take up that much time.
  • NEAR_RPC_API_KEY: This is the API key necessary for communicating with RPC nodes. This is useful when interacting with services such as Pagoda Console or a service that can access RPC metrics. This is not a hard requirement, but it is recommended for running the Pagoda example in the examples folder.
  • NEAR_ENABLE_SANDBOX_LOG: Set this to 1 to enable sandbox logging. This is useful for debugging issues with the neard-sandbox binary.
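For example, to debug a flaky sandbox test, we might enable sandbox logging and raise the RPC timeout for a single run (the test name is a placeholder):

NEAR_ENABLE_SANDBOX_LOG=1 NEAR_RPC_TIMEOUT_SECS=30 cargo test test_nft_contract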


near-workspaces-rs's Issues

Feature Idea: Utilize workspaces as a pipeline tool

for example:
1. workspaces can be used for testing + deployment to testnet
2. local & testnet can be mocked and deployed to
3. Workspace tests all pass, then re-use some methods for deployment + configure + migration phases
4. Output the report into a snapshot that can be tagged as a release for github (so CI/CD can do release)

For more context, truffle users will be familiar with [Truffle Migrate](https://trufflesuite.com/docs/truffle/getting-started/running-migrations) – while I'm not in any way proposing this model, I'm just suggesting it as a possible thought pattern

Add API for measuring gas

Currently, we'd comb through the CallExecution outcomes from certain functions that emit this value, such as call (but not all functions will emit it), to get the used gas. And then later, we add it all up to get the total gas consumed. We can ease this by having some sort of GasMeter that would measure changes between each call, or whenever we want to get the elapsed gas used.

This would potentially look like this:

let meter = GasMeter::now(worker);
...
worker.call(...);
...
println!("Gas used: {} yoctoNear", meter.elapsed().as_yocto());

Errors can cause long block times for retries

I came across a situation where making a .call that has bad args or a bad response (.json()? fails for whatever reason) causes the retries (HERE: let retry_strategy = ExponentialBackoff::from_millis(10).map(jitter).take(5);) to delay the resolution of a call. anyhow makes certain error situations go unnoticed, leaving the developer (me) confused until much more debugging.

I suggest 2 ideas:

  1. make retries configurable, default to 5?
  2. potentially have a universal error catch that reports upon flag if found within .call or .view so developer is alerted to the possibility that it is their code, not the network/sandbox.

Hide sandbox logs in tests by default

The running node logs should probably be hidden by default to not clutter the rust tests, especially since in the case of sandbox there is unlikely a log related to any TX. Would still be nice to be able to show/configure logs with RUST_LOG though.

Opening issue here for now, but I'm not sure if the logs can be suppressed or configured from running the process within the workspace test as it is.

How to call a method on the `contract` and sign it by the `user`?

Hey there! I'm trying to start using this project and I really love it, but I guess I have use-cases not covered yet.

Basically, I have a contract deployed and I need to check the user (different from the owner) can use it properly.

I've managed to find the worker.dev_create() function and get the user, but I'm stuck trying to call the contract without signing it as the owner.

pub async fn call(
        &self,
        contract: &Contract,
        method: String,
        args: Vec<u8>,
        deposit: Option<Balance>,
    ) -> anyhow::Result<CallExecutionDetails> {
        self.client()
            .call(
                contract.signer(),
                contract.id().clone(),
                method,
                args,
                None,
                deposit,
            )
            .await
            .map(Into::into)
    }

the function call takes only the contract arg and makes it both the callee and the caller, which is not always the desired behavior. Was it done intentionally, or should it be changed?

I guess I can open a PR if you agree that this should be changed.

Assign me on this issue if you want me to open a PR. Thanks!
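For context, the current shape of the API supports this directly: an Account created with dev_create_account signs its own calls, with the contract as the callee only. A hedged sketch (the method name is a placeholder):

let user = worker.dev_create_account().await?;

// `user` signs the transaction here; `contract` is only the callee:
let outcome = user
    .call(contract.id(), "some_method")  // placeholder method name
    .args_json(serde_json::json!({}))
    .transact()
    .await?;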

Traits or no traits

There are two alternative ways we can design the overall shape of the API.

One is to make the Worker type parametrized by the network type. So things like Sandbox and TestNet are different types. The public API would look roughly like this:

/// Worker holds an actual network, a distinct type
pub struct Worker<T> { }

/// Methods which can be specific to some network are abstracted away in a
/// trait.
#[async_trait]
pub trait TopLevelAccountCreator {
    async fn create_tla(&self, id: AccountId, signer: InMemorySigner) -> Result<CallExecution<Account>>;
    async fn create_tla_and_deploy(&self, id: AccountId, signer: InMemorySigner, wasm: Vec<u8>) -> Result<CallExecution<Contract>>;
}

/// Worker has extra methods via trait if the underlying network supports it,
/// checked at compile time
#[async_trait]
impl<T> TopLevelAccountCreator for Worker<T>
where
    T: TopLevelAccountCreator + Send + Sync,
{
    // Delegates
}

/// Methods available on all of the networks are inherent.
impl<T> Worker<T>
where
    T: NetworkClient,
{
    pub async fn call(&self, contract: &Contract, method: String, args: Vec<u8>, deposit: Option<Balance>) -> Result<CallExecutionDetails> { }
    pub async fn view(&self, contract_id: AccountId, method_name: String, args: FunctionArgs) -> Result<serde_json::Value> { }
}

pub struct Sandbox { }
impl NetworkClient for Sandbox {}
#[async_trait]
impl TopLevelAccountCreator for Sandbox {}

pub struct Testnet { }
impl NetworkClient for Testnet {}
// NB: this impl is **not** provided.
// #[async_trait]
// impl TopLevelAccountCreator for Testnet {}

The usage would look like this:

use workspaces::{Worker, Sandbox, TopLevelAccountCreator};

fn main() {
    let w: Worker<Sandbox> = Worker::new();
    ...
}

fn generic_usage<T: TopLevelAccountCreator + NetworkClient>(w: Worker<T>) {
    ...
}

An alternative is to not make a type-level distinction between the types of network. The public API would then look like so:

/// Worker holds a JSON RPC client, the same type for all networks.
pub struct Worker { }

impl Worker {
    /// What network the worker is for is determined at runtime, rather than by
    /// a type parameter.
    pub fn new_testnet() -> Worker {}
    pub fn new_sandbox() -> Worker {}

    /// It's possible to call any method on any worker, but that will result in
    /// an "unsupported method" error at runtime, if the underlying network
    /// doesn't support the method.
    pub async fn create_tla(&self, id: AccountId, signer: InMemorySigner) -> Result<CallExecution<Account>>;
    pub async fn create_tla_and_deploy(&self, id: AccountId, signer: InMemorySigner, wasm: Vec<u8>) -> Result<CallExecution<Contract>>;

    pub async fn call(&self, contract: &Contract, method: String, args: Vec<u8>, deposit: Option<Balance>) -> Result<CallExecutionDetails> { }
    pub async fn view(&self, contract_id: AccountId, method_name: String, args: FunctionArgs) -> Result<serde_json::Value> { }
}

The usage would look like this:

use workspaces::Worker;

fn main() {
    let w = Worker::new_sandbox();
}

fn generic_usage(w: Worker) {
    ...
}

fn runtime_generic_usage() {
    let w = if std::env::args().contains("--testnet") {
        Worker::new_testnet()
    } else {
        Worker::new_sandbox()
    };
}

The question is, which one should we choose.

Traits:

Pros:

  • compiler checks that operations "make sense"
  • it's possible (though not super trivial) to see which specific operations differentiate networks

Cons:

  • Significantly larger public API: there are many Rust names the user is exposed to
  • Requires many imports for the user to use
  • Somewhat high-brow language machinery – the API is generic and has trait bounds, rather than being just inherent methods which are always available
  • Adds a public dependency on async_trait, which is cool, but is fundamentally a hack.
  • Just overall is significantly more wordy
  • It might be hard to get a traits-based API right on the first try, as traits have more language rules around them (restrictions regarding dynamic dispatch, async in traits, coherence, etc).

No Traits:

Pros:

  • as simple an API as one can get – a single type with a bunch of methods
  • no need to import lots of stuff – you can start with Worker and autocomplete anything to victory
  • if users want to write code which can work with several networks, they don't have to make that code generic and figure out which trait bounds they need
  • users can make the decision about the network at runtime (see runtime_generic_usage)

Cons:

  • user might do something wrong like Worker::new_mainnet().patch_state() and get an error at runtime, rather than at compile time.

Re-deploying a contract to the same account

I could not find a convenient way to do this with the current API. The closest thing is doing this:

let worker = workspaces::sandbox();
let (id, sk) = worker.dev_generate().await;
let contract = worker
    .create_tla_and_deploy(
        id.clone(),
        sk.clone(),
        include_bytes!("../res/legacy_w_near.wasm").to_vec(),
    )
    .await?
    .result;
...
let contract = worker
    .create_tla_and_deploy(
        id.clone(),
        sk,
        include_bytes!("../res/w_near.wasm").to_vec(),
    )
    .await?
    .result;

The problem with this is that SecretKey is a non-public type, so you can only do the pattern above in-place (i.e. without passing sk into other functions).
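For reference, newer versions expose a deploy method on Account that sidesteps this. A hedged sketch, assuming a recent near-workspaces where Contract::as_account and Account::deploy are available:

let contract = worker
    .dev_deploy(include_bytes!("../res/legacy_w_near.wasm"))
    .await?;
// ...
// Re-deploy different code to the same account, reusing its keys:
let contract = contract
    .as_account()
    .deploy(include_bytes!("../res/w_near.wasm"))
    .await?
    .into_result()?;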

Allow patch_state to take in multiple patches

Discussed in #6 (comment). This would potentially require exposing some low-level APIs for this to work, due to not fulfilling the requirements of turning BorshSerialize into a trait object for accepting multiple patches at the same time.

One potential solution mentioned by @austinabell is to have the patch_state signature take IntoIterator<Item=(Key, Vec<u8>)> then expose potentially higher level APIs with this.

Implement mainnet runtime

This will potentially be deferred out further, since this is not a main priority for now. https://github.com/near/workspaces-js does not have one either, but it would be nice down the road to have stuff working on testnet also be working on mainnet. Previous issue #3 already closed out the testnet portion.

If anyone needs this, feel free to bump this or make a comment.

Allow configuring runtime with environment variable

This will allow us to just specify runner::main and runner::test and have it pick a runtime based on whatever environment variable we choose such as NEAR_RUNNER_NETWORK

This might depend on #3 being implemented first before this gets implemented

Provide spooning example of Testnet contract

The goal of this ticket is to use either an existing sandbox test or a new one to demonstrate "spooning" of a testnet contract written in Rust.

  1. Use sandbox testing to deploy a Rust contract to a local sandbox instance
  2. Take the testnet state of the same smart contract (that is populated)
  3. Use the patch state functionality to apply those changes locally
  4. Write a simple test that demonstrates some expected behavior

Explore using single sandbox instance

Currently, all workspaces::sandbox() calls will spin up a new sandbox instance. This was to prevent non-determinism from multiple tests hitting the same node and potentially spinning up similarly sounding accounts and/or modifying similar data with patch state. But this might not be the best for most use-cases and I think we can be less strict about this.

I think we can switch the defaults over to using a single sandbox, and if users are experiencing this non-determinism, they can switch back over to multiple sandboxes. I'm not sure this is the best default, but this is very open to opinions, so comment if there's anything that could be worked out better.

Alternatively, we can have an extra macro for each test that points to the port for the sandbox, like #[sandbox::at_port(3030)] or #[sandbox::multi_instance]. Or if we don't want to use macros, we can have it be a separate function like workspaces::sandbox_on(3030).

Dependency issue with Sandbox

I'm getting the error:

error: failed to get `near-sandbox-utils` as a dependency of package `workspaces v0.1.0 (/home/near/workspaces-rs/workspaces)`

Caused by:
  failed to load source for dependency `near-sandbox-utils`

Caused by:
  Unable to update https://github.com/near/sandbox

Caused by:
  failed to find branch `master`

Caused by:
  cannot locate remote-tracking branch 'origin/master'; class=Reference (4); code=NotFound (-3)

This seems to be caused by a dependency on the older project "sandbox".

Can this dep get removed soon?

Reproduce

Starting from a clean git clone:

cargo run --package examples --example nft

Do not require `&worker` when executing `{contract, account}.call`

Accounts and Contracts do not need to take in a worker when performing calls, but doing so would end up requiring type parameters in the current state of the API. Once Worker no longer takes a type parameter and issue #31 is resolved, we can just add a Worker pretty simply into these types.

Figure out an easy way to both compile contracts and run them with workspaces

This might be a tricky one to solve, but ideally, we should have a way that allows users of both the sdk and workspaces to compile a contract, and then automatically point it towards workspaces for it to be deployed. We could scrape the local project's Cargo.toml for this info, but there be dragons if it were nested into multiple members.

worker.dev_deploy(workspaces::this_project());

or if multiple contracts

worker.dev_deploy(workspaces::this_project("ref-finance"));

*syntax not final

Cannot create several tests

If you try running several separate tests with a sandboxed environment for the first time, at least one of them fails.

See an example: NinoLipartiia4ire/workspaces-rs-example@33a479d.
Both tests from this example should pass. But if you run them:

git clone https://github.com/NinoLipartiia4ire/workspaces-rs-example
cd workspaces-rs-example/tests-workspaces-rs && cargo test

Either one of them fails:

test second_test ... ok
test first_test ... FAILED

failures:

---- first_test stdout ----
Starting up sandbox at localhost:24842
thread 'first_test' panicked at 'called `Result::unwrap()` on an `Err` value: Directory not empty (os error 66)', /Users/nino/.cargo/registry/src/github.com-1ecc6299db9ec823/workspaces-0.1.0/src/network/sandbox.rs:43:24

Or both tests fail:

test second_test ... FAILED
test first_test ... FAILED

failures:

---- second_test stdout ----
Starting up sandbox at localhost:22402
thread 'second_test' panicked at 'called `Result::unwrap()` on an `Err` value: /Users/nino/projects/issue-workspaces-rs/workspaces-rs-example/tests-workspaces-rs/target/debug/build/near-sandbox-utils-0a71fab0dd1ce1e6/out/near-sandbox-8b3f59a31cb9016b/near-sandbox is not executable', /Users/nino/.cargo/registry/src/github.com-1ecc6299db9ec823/workspaces-0.1.0/src/network/sandbox.rs:43:24
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- first_test stdout ----
Starting up sandbox at localhost:17497
thread 'first_test' panicked at 'called `Result::unwrap()` on an `Err` value: No such file or directory (os error 2)', /Users/nino/.cargo/registry/src/github.com-1ecc6299db9ec823/workspaces-0.1.0/src/network/sandbox.rs:43:24

Running cargo test a second time and later doesn't return any errors; both tests pass.

To reproduce this bug again: cargo clean && cargo test.

Add helper functions

Here is a shortlist of helpful functionality from simulation tests that is worth having for sandbox testing:

  • did tx succeed?
  • what's the error
  • what's the log
  • logs_contain value (suggestion)
  • unwrap json ability
  • unwrap borsh ability

Allow specifying sandbox version in workspaces

near/sandbox#26 was added to sandbox to be able to configure the binary download location/version, but it would be beneficial to be able to express this with some API from workspaces to override this without relying on setting an env variable when running tests.

I'll avoid suggesting a specific API since it depends on what happens with the sandbox issue and other issues/changes like #71

This issue is blocked until the direction of the sandbox issue is decided and implemented

This would benefit things like #81 by avoiding having to use a specific version of workspaces that matches the sandbox version you want, or having to manage downloading the binary manually

`workspaces` name is too generic

Please, consider renaming the repo and crate name to near-workspaces-rs (I am still confused why it is named as "workspaces" to begin with 🤷‍♂️ )

This feedback was brought by a few people including Illia, and I agree with them that reading workspaces = "..." in Cargo.toml file of a contract that uses it does not give you any relevant hints.

Add unit helper

Within workspaces-js, we have some pretty neat unit conversions helpers such as NEAR.parse('1,000,000,000 N') to supply to function calls. Would be nice to have something like it for the rust version as well.
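For what it's worth, the near-token types adopted in later versions cover part of this. A small sketch of the constructors that exist today (parsing display strings like '1,000,000,000 N' is the part still missing):

use near_workspaces::types::NearToken;

let billion = NearToken::from_near(1_000_000_000);      // 1 billion NEAR
let fraction = NearToken::from_millinear(100);          // 0.1 NEAR
let exact = NearToken::from_yoctonear(10u128.pow(24));  // 1 NEAR, in yocto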

Support patching account and access-keys

Currently, only patching contract state is supported via #6. This might also depend on #12, where patching contract state, accounts and access keys can share a common internal function.

Update sandbox & workspaces with the latest once NEP264 function call gas weight is implemented

Blocked by: near/nearcore#6285

TODO:
1. Update the commit hash specified [here](https://github.com/near/sandbox/blob/eaf5676daf53d0958f2b765e8b34dc0840792081/crate/src/lib.rs#L25).
2. Patch release for sandbox/workspaces with this change. The binaries for sandbox are automatically being uploaded on latest nearcore per commit so there's not much to worry there. https://github.com/near/sandbox/blob/eaf5676daf53d0958f2b765e8b34dc0840792081/crate/src/lib.rs

Windows builds

Ticket to track Windows support. Currently, sandbox binaries are only being released for macOS (x86) and Linux (x86). Would be nice if we had a Windows one as well, since near-sdk-rs supports compilation on Windows and it would be consistent if workspaces worked there too

Better support for Error assertion

Sorry if this is a n00b issue. :|

Here is a basic example of what i'd like to do:

// try to unregister agent again, check it fails
    let fail_agent = agent
        .call(&worker, contract.id().clone(), "register_agent")
        .args_json(json!({}))?
        .deposit(parse_near!("0.00226 N"))
        .transact()
        .await
        .context("Failed to re-register")?;
    println!("agent fail_agent {:#?}", fail_agent);
   // i know - this is bad, but you get the idea
   assert!(fail_agent.is_error());

however it seems there are no exported ways that I can check the failure? Here is the response I get:

agent fail CallExecutionDetails {
    status: Failure(ActionError(ActionError { index: Some(0), kind: FunctionCallError(ExecutionError("Smart contract panicked: Agent already exists: Agent { status: Active, payable_account_id: AccountId(\"agent_1.dev-20211221202721-35660262466987\"), balance: U128(2260000000000000000000), total_tasks_executed: U128(0), last_missed_slot: 0 }. Refunding the deposit.")) })),
    total_gas_burnt: 6197905373141,
}

Which is exactly the response I would expect, and I am testing to make sure it fails. Is there an example or easy way to peel back the error onion?
Here's the only spot I could find where it highlights another way, and that gets strange: https://github.com/near/workspaces-rs/blob/main/examples/src/spooning.rs#L116-L129
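For reference, newer versions of the API make this easier: the result of transact() exposes failure checks and the underlying error. A hedged sketch assuming a recent near-workspaces, where transact() returns an ExecutionFinalResult:

let outcome = agent
    .call(contract.id(), "register_agent")
    .args_json(serde_json::json!({}))
    .deposit(NearToken::from_yoctonear(2_260_000_000_000_000_000_000))
    .transact()
    .await?;

// Assert the call failed, then inspect the error itself:
assert!(outcome.is_failure());
let err = outcome.into_result().unwrap_err();
assert!(format!("{:?}", err).contains("Agent already exists"));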

Allow more generic method calls

Based on #1 (comment), we need to evaluate whether some arguments and/or return values in rpc/api.rs would be better suited as other types. This will also be a continuation of #2

  • dev_deploy should return a single object instead of both contract_id and signer

Decide on and switch error handling to something more generally usable

Currently just using string errors for everything, but would probably be best for long-term maintainability if we pick an error type/pattern to use before the API surface gets large and starts to be used.

Options:

  • Concrete types that implement std::error::Error
  • Box<dyn std::error::Error> error types
    • Reduces code internally to avoid having to convert errors from other crates, but can be annoying to use in cases
  • Anyhow Error https://github.com/dtolnay/anyhow
    • probably just a better version of the above; doesn't seem like it adds any overhead to a user other than potentially being a bit confusing for new Rust devs, but probably not much more than String errors
  • Keep String error on public APIs, use one of the above internally for convenience

I personally think using something like anyhow makes sense for this kind of API that is just used for testing, but let me know if you guys have thoughts. Concrete error types could also be an option but this might become tedious without any real benefit

How do you get less verbose output

Currently when running tests I get

Jan 25 15:44:51.390  INFO neard: Version: trunk, Build: 2c9375e, Latest Protocol: 48
Jan 25 15:44:51.395  INFO near: Generated node key, validator key, genesis file in /var/folders/vp/8bpjzhtd71s6vyy6382ylq440000gn/T/sandbox-22709
Started sandbox: pid=3954
Jan 25 15:44:51.405  INFO neard: Version: trunk, Build: 2c9375e, Latest Protocol: 48
Jan 25 15:44:51.411  INFO near: Did not find "/var/folders/vp/8bpjzhtd71s6vyy6382ylq440000gn/T/sandbox-22709/data" path, will be creating new store database
Jan 25 15:44:51.929  INFO stats: Server listening at ed25519:[email protected]:15148
Jan 25 15:45:01.935  INFO stats: #      16 CtKDYPhFxX79m25tUY5cgM1fVdYXaFCfezdEA934hRvB V/1  0/0/40 peers ⬇ 0 B/s ⬆ 0 B/s 1.60 bps 827.30 Ggas/s CPU: 0%, Mem: 54.2 MiB 

Is there a way to silence this?

Add convenience functions for working with Account and Contract types

Instead of passing contract/account into worker.call(), we can add some wrappers around it within account/contract itself.

For instance, account.create_account or contract.view would be more ergonomic

Non-exhaustive list of helpers:

  • account
    • [x] create_account (sub-account)
    • create_and_deploy
    • [x] balance / available_balance
    • [x] delete (deletes the account, and sends balance to beneficiary)
  • contract
    • [x] call
    • [x] view
    • [x] view_code
    • view_state
    • patch_state (sandbox only)

Switch prints to actual logging solution

Currently there are some println and eprintln calls, but we probably want these to be an actual logging solution so that devs can show/hide/filter or configure their logs.

The most general use case would be using the log crate, I think, but if we are specifically tying our impl to tokio, we might want to use their tracing crate

Refactor `TopLevelAccountCreator` to enable mainnet to do top level account creation

Currently, the function signatures provided by the TopLevelAccountCreator trait only allow specifying an account-id and secret-key, but this doesn't fully support account creation on mainnet, since we require someone to pay for the transaction and an initial deposit to be sent as well. Might be a good idea to allow parameterizing this further with builders, just like how account.create_subaccount returns an account builder to allow specifying more details. It would be best to unify it with that too, if possible, if we go down this route.

Performance issues

Here is a repo I made and used workspaces-rs to test with: https://github.com/TENK-DAO/vip-list

For me the single test takes 200s to run on sandbox, compared to ~11s to run the JS version.

yarn

yarn test # runs rust

yarn ava # runs js

Do I have something wrong with my setup? I gather that for other people 200s is not the norm.

Investigate inner JSON RPC serialization

Currently, the default args supplied when calling into a contract's change function are an empty JSON map {}. This is required when we want to supply no arguments to a change function whose parameters are all Optionals. This might just be a weird case in how jsonrpc is interpreting/serializing the bytes underneath.

Additionally, functions that take no arguments are fine with either empty bytes (Vec::new()) or the empty json map (json!({})).

We'll need to take a closer look to see whether this is the right thing to be using or not. But for now, json!({}) will be the default.

Original discussion of this here: #33 (comment)

ref-finance example fails due to low storage deposit

error:

Error: Action #0: ExecutionError("Smart contract panicked: panicked at 'ERR_STORAGE_DEPOSIT need 3290000000000000000000, attatched 3000000000000000000000', ref-exchange/src/lib.rs:372:14")

Will investigate or add more details tomorrow, just wanted to open this so I didn't forget

Do not have access to logs from call result

The results from view and call probably need to be cleaned up such that we don't expose anything unmaintainable, but also that we include all necessary data.

We can either just add logs to the call result object for this issue, or discuss and make sure we have a good long-term API plan and stick to it

Make genesis configurable

near-sdk-sim allowed users to specify their own genesis with custom parameters. Among other things, you could manipulate gas price to see how your contract would behave when gas price jumps up 10x. workspaces lacks this flexibility right now which is (kind of) blocking the remaining work in near/near-sdk-rs#563.

Allow empty args for call/view

Currently it expects at least empty JSON. This is defaulted within the CallBuilder, but it should be absolutely valid to pass empty bytes or some protocol other than JSON. This might involve making a change to the RPC client; just opening an issue here for tracking

Speedup iteration on public API design

I've noticed that a couple of times it has happened that I am tagged on a review for a large PR, and leave a "let's change the world" comment like "why is this using macros?" or "why is this using traits?", which sometimes blocks the actual technical work. I wonder if there's a better design process possible?

In general, I think it's very important to get the design of the public APIs used by developers in the near ecosystem right. It's not something we can lightly change later. While we technically can release workspace versions 1.0, 2.0, 3.0, etc, that's not great for the ecosystem. We should strive to be like serde, which is at 1.0.130 right now. It's hard to ship a perfect API right off the bat, but getting it right makes it so much easier to build stuff on top of it.

Here are some suggestions on how we can converge on good APIs faster:

  • Make it easy to see what the public API is, by separating it out from impl details. It's faster to review a design PR whose public APIs just todo!() all implementations, and review the impl separately.
  • Get feedback on Rust API design from several people earlier, and proactively. Dropping a "hey, we design this new thing which is going to be our public API" message to https://near.zulipchat.com/#narrow/stream/300659-Rust-.F0.9F.A6.80 might be a good idea.
  • Maybe separate "API design" into a separate chunk of work – start with a shared google doc/hackmd document which lays out how the API works, get feedback on that, and then proceed to the impl.

Setup GitHub actions for this repo

Need to set up some basic CI for this repo, since a couple of things can slip through the cracks, such as formatting and unused imports. It can just be GitHub Actions for now since that seems to be the simplest
