dojoengine / dojo

Dojo is a toolchain for building provable games and autonomous worlds with Cairo

Home Page: https://dojoengine.org

License: Apache License 2.0

Languages: Cairo 8.55%, Rust 88.65%, Shell 1.46%, Dockerfile 0.16%, Makefile 0.17%, Solidity 0.71%, TypeScript 0.30%
Topics: cairo, ecs, game-development, rust

dojo's Introduction

Dojo: Provable Games and Applications

Dojo is a developer-friendly framework for building provable Games, Autonomous Worlds and other Applications that are natively composable, extensible, permissionless and persistent. It is an extension of Cairo, an efficiently provable language, that supports the generation of zero-knowledge proofs attesting to a computation's validity, enabling exponential scaling of onchain computation while maintaining the security properties of Ethereum.

It is designed to significantly reduce the complexity of developing provable applications that can be deployed to and verified by blockchains. It does so by providing a ~zero-cost abstraction for developers to succinctly define provable applications and a robust toolchain for building, migrating, deploying, proving and settling these worlds in production.

Getting Started

See the getting started section in the Dojo book to start building provable applications with Dojo.

You can find more detailed documentation in the Dojo Book.

Development

We welcome contributions of all kinds from anyone. See our Development and Contributing guides for more information on setting up your developer environment and how to get involved.

If you encounter issues or have questions, you can submit an issue on GitHub. You can also join our Discord for discussion and help.

Built with Dojo

Audit

Dojo core smart contracts have been audited.

dojo's People

Contributors

0xicosahedron, akhercha, ametel01, aymericdelab, broody, cheelax, eightfilms, ftupas, gianalarcon, glihm, greged93, jonatan-chaverri, junichisugiura, kariy, lambda-0x, larkooo, milancermak, neotheprogramist, notv4l, ponderingdemocritus, ptisserand, raresecond, remybar, rkdud007, shramee, tarrencev, tcoratger, whatthedev-eth, wraitii, xjonathanlei


dojo's Issues

Implement contract verification during migration

When declaring new contracts, verify them with starkscan.

We should do this in a generic way to support future verification implementations.

Here is the Starkscan API info:
JSON def:
https://github.com/starkscan/starkscan-verifier/blob/acf3762c3ede91fe22a661e5b74c2e688289ffee/src/types.ts#L5
Endpoint:
https://github.com/starkscan/starkscan-verifier/blob/acf3762c3ede91fe22a661e5b74c2e688289ffee/src/api.ts#L78

Here is the Voyager API repo:
https://github.com/NethermindEth/voyager-verify

Simple game for bevy-dojo indexer testing

The game is:

  • Game in Bevy
  • A grid (map)
  • Players are rendered as a (random) block on the grid
  • Players can move the box with arrow keys

Then we will proceed with testing a shared instance to update components in #137.

Extend component to describe schema + packing

Generate a schema definition for components to support introspection

Something like

    fn schema() -> Array<(kind, len)> {
        Array::new([
            ('felt252', 252),
            ('u32', 252),
        ])
    }

We generate the component implementations here:

body_nodes.push(RewriteNode::interpolate_patched(

We can iterate over a struct's members like this:

struct_ast.members(db).elements(db).iter().for_each(|member| {

For the initial implementation, len should always be 252. We can implement bit packing as a follow up.
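As a sketch of what the schema enables downstream, here is a hypothetical Rust helper (the function name and the 251-usable-bit figure are assumptions for illustration, not actual dojo code) that computes how many felts a component occupies once real bit lengths are used:

```rust
// Hypothetical sketch: given a schema of (kind, len) pairs, compute how
// many felt252 storage slots a packed component needs. Members are packed
// greedily and never split across felts.
fn felts_needed(schema: &[(&str, u32)]) -> u32 {
    const FELT_BITS: u32 = 251; // usable bits in a felt252 (assumption)
    let mut felts = 0;
    let mut free = 0;
    for &(_, len) in schema {
        let len = len.min(FELT_BITS); // a full felt252 member fills one slot
        if len > free {
            felts += 1;
            free = FELT_BITS;
        }
        free -= len;
    }
    felts
}

fn main() {
    // Initial implementation: every member has len 252, so one felt each.
    println!("{}", felts_needed(&[("felt252", 252), ("u32", 252)])); // 2
    // With real bit lengths, small members pack into a single felt.
    println!("{}", felts_needed(&[("u8", 8), ("u32", 32), ("u128", 128)])); // 1
}
```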

Once you've edited the codegen you can update the generated output in https://github.com/dojoengine/dojo/blob/main/crates/dojo-lang/src/plugin_test_data/component with

CAIRO_FIX_TESTS=1 cargo test --package dojo-lang --lib -- plugin::test::expand_contract::component --exact --nocapture 

Implement World contract

The world contract acts as a global namespace.

Its primary responsibilities are:

  • Deploy + register components into the world
  • Maintain registry of entities and components, enabling entity/component lookups using selectors
  • Mux events from constituent components

yarn add @dojo

We need to build an npm package that allows developers to easily integrate Dojo worlds.

We start with a low-level library, on top of which React hooks, Vue injections, and other JS libraries can be built.

This library should be built with multiple providers in mind. We should maintain support for:

  • RPC (first release)
  • Web Sockets
  • WebRTC (future)
  • Beerus

Where possible, the library should expose commands that mirror the Dojo commands to streamline understanding between the libraries.

In Dojo, we can get a component by:

let player = commands::<(Health, Name)>::get(player_id);

Correspondingly, our functions should be designed around the same get/set pattern, passing in the entity id. From these queries, the state of the entities can be created and displayed to the client.

So, at a minimum:

Once this package is done, we can craft the React SDK with it.

interface IData {
  [key: string]: any;
}

interface IRPCParams {
  method: string;
  params: any[];
}

class Dojo implements IData {
  [key: string]: any;

  private rpcEndpoint: string | undefined;
  private rtcEndpoint: string | undefined;

  constructor(initialData: IData, rpcEndpoint?: string, rtcEndpoint?: string) {
    Object.assign(this, initialData);
    this.rpcEndpoint = rpcEndpoint;
    this.rtcEndpoint = rtcEndpoint;
  }

  // Generic execute function: routes to RPC or WebRTC based on the prefix
  async execute<T>(method: string, params: any[]): Promise<T> {
    if (method.startsWith('rpc.')) {
      return this.callRPC<T>({ method, params });
    } else if (method.startsWith('rtc.')) {
      return this.establishWebRTCConnection(params[0]) as unknown as Promise<T>;
    } else {
      throw new Error('Invalid method prefix. Must be "rpc." or "rtc."');
    }
  }

  // RPC function
  private async callRPC<T>(rpcParams: IRPCParams): Promise<T> {
    if (!this.rpcEndpoint) {
      throw new Error('RPC endpoint is not defined.');
    }

    // Implement the real RPC call here.
    // For now, mock the response with a simple Promise.
    return new Promise((resolve) => {
      setTimeout(() => {
        resolve(
          `RPC call to ${rpcParams.method} with params: ${rpcParams.params} at ${this.rpcEndpoint}` as unknown as T
        );
      }, 1000);
    });
  }

  // WebRTC function (connection setup left unimplemented)
  private async establishWebRTCConnection(roomId: string): Promise<void> {
    if (!this.rtcEndpoint) {
      throw new Error('RTC endpoint is not defined.');
    }
  }
}

Something like this?

Thoughts @tarrencev @fracek ?

Migration planning

Given that every component / system in the world is deterministically addressable, the dojo migrate CLI takes a world address as an entrypoint and diffs the onchain state against the compiled state, generating a deployment + migration plan for declaring and registering new components and/or updating existing ones.

The initial task is to compute addresses for all the components / systems in the world directory. Once we have those, we can add logic to diff them against the existing onchain state.
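The diff step can be sketched as follows; this is a minimal illustration assuming components are keyed by name and compared by class hash (the types and function names here are hypothetical, not dojo's actual ones):

```rust
use std::collections::HashMap;

// Illustrative sketch: compare locally compiled class hashes against the
// hashes registered onchain; anything new or changed goes into the plan.
fn migration_plan(local: &HashMap<String, u64>, remote: &HashMap<String, u64>) -> Vec<String> {
    let mut plan: Vec<String> = local
        .iter()
        .filter(|(name, hash)| remote.get(name.as_str()) != Some(*hash))
        .map(|(name, _)| name.to_string())
        .collect();
    plan.sort(); // deterministic ordering for display / execution
    plan
}

fn main() {
    let local = HashMap::from([
        ("Position".to_string(), 0xa1u64),
        ("Health".to_string(), 0xb2),
    ]);
    let remote = HashMap::from([("Position".to_string(), 0xa1u64)]); // Health not yet declared
    println!("{:?}", migration_plan(&local, &remote)); // ["Health"]
}
```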

Hookup `cairo-lang-dojo` cli

Related to #2, figure out how to properly initialize and compile dojo ecs.

Right now we have the test cases running with a fake db. With the cli, it should use the proper language db and pipeline the plugins, resulting in Starknet contract artifacts being output, with (optional) intermediate outputs.

Eventually it seems like Cairo will support dynamic language plugins, but that might be a while away, so I think wrapping it for now makes sense.

https://github.com/dojoengine/dojo/blob/main/crates/cairo-lang-dojo/src/cli.rs

Add Release GH Action

Build and tar binaries based on pushed tag to the repository.

Initially build for macOS x86_64 and linux x86_64 and include:

  • dojo
  • dojo-indexer
  • dojo-language-server
  • dojo-test
  • prisma-cli

Generate component protobuf interfaces

During compilation, we can output protobuf definitions for component structs which can be used by the server / apibara to provide strictly typed event interfaces.
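As an illustration, a `Moves { remaining: u8 }` component (as in the example manifest elsewhere in this repo) might map to a definition like the following; the message shape and type widening are assumptions, not the actual codegen output:

```protobuf
// Hypothetical protobuf emitted for a `Moves { remaining: u8 }` component.
syntax = "proto3";

message Moves {
  uint32 remaining = 1; // u8 widened to uint32, the closest proto3 scalar
}
```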

Trait compilation not working as expected

It is possible to compile broken traits currently.

To reproduce, just create a known error in a trait function and try to build. The build will succeed, but it is clearly not correct.

feat: Dojo SDK for Indexer and Contracts

It makes sense for us to build a React SDK for Dojo first, before the completion of Dojoscan, as Dojoscan itself would use it, killing two birds with one stone.

Dojo SDK is just a react hook library:

  • Indexer address and options
  • useDojo hook to fetch state / query the indexer
  • State reconstruction functions

It could also wrap up useStarknet so developers only need to install Dojo and not both libraries.

This should be a fairly straightforward lift and then gives an easy integration point into React-only games and websites. Eternum will probably use R3F for the foreseeable future.

Should this exist in a standalone repo?

Refactor: Improve `migration` world representation

During migration planning, we construct a local and remote representation of the world, in order to diff them and generate a migration plan.

The current code is at:

struct World {

It is pretty hacked together right now and can be improved by using the manifest generated by the build step.

impl Manifest {

In particular, the component hashing logic can be moved to the build step:

serde_json::to_writer_pretty(file.deref_mut(), &class)

The manifest can then be extended to include a from_rpc method that generates a manifest based on the remote source.

Finally, we can introduce migration planning logic which takes a local + remote manifest and generates a strategy to transition the remote state to match the local.

Remove assignment requirement from `commands::set`

Currently, it is required to assign commands::set to a variable, since the variable is used to uniquely identify the underlying calldata array that is generated.

let mut __$var_name$_calldata = ArrayTrait::new();

Ideally we could inline the creation and avoid the assignment, but I don't think that is possible. One approach could be to construct an identifier using the component types and the storage key. Another option is to namespace the operation with a nested module.

Commands:: not recognized inside if statement

When trying to use commands:: within an if statement, I get an error: Plugin diagnostic: Identifier not found.

Example:

let drug = commands::<Drug>::get((game_id, (player_id, drug_id)).into());
if drug.is_some() {
    let _ = commands::set((game_id, (player_id, drug_id)).into(), (
        Drug { 
            id: drug_id, 
            quantity: updated_quantity
        }
    ));
}

try_entity() returns double wrapped option

It looks like I have to unwrap again after expect. Can we just return the component?

let maybe_game = commands::<Game>::try_entity(game_id.into());
let game = maybe_game.expect('no game found');
let game = game.unwrap();  

The entity api is really nice though

Transaction signer

Implement a transaction signer interface and provide a base implementation which reads a private key from the local system.

The signer should be used during migration to sign and submit transactions.

Support commands outside systems

Currently, commands are only supported in system execute functions. This is mostly due to the challenge of parsing for commands and expanding the code.

We're working to introduce inline macros to Cairo, which will make it much easier to support commands outside of systems.

World indexer

Implement an indexer to index world events in an efficient and easily queryable manner. The database should expose both historical and the current state of the world.

Objects like:

  • components
  • entities
  • systems
  • resources

Historical events like:

  • component / system / entity registration
  • entity state updates
  • system calls

The indexer could use the https://github.com/apibara/apibara engine to stream events from Starknet and write them to a SQLite db that can be easily replicated and shared with others.

Create `world.toml` config file

Create a world.toml file that extends cairo_project.toml to include the world address

Currently we read cairo_project.toml here:

let mut config = ProjectConfig::from_directory(&source_dir).unwrap_or_else(|error| {

The world address needs to be made available here: https://github.com/dojoengine/dojo/blob/main/crates/dojo-lang/src/query.rs#L49

It can be provided as an attribute on the db https://github.com/dojoengine/dojo/blob/main/crates/dojo-cli/src/build.rs#L68

Ids are felt

May I suggest using usize instead of felt?
what kind of operations will you be making with them?
If not a lot, I think usize is better:

  1. Faster sequencing, since it will run faster on x86.
  2. Future proofing if we change fields in cairo

Implement entity query api

Systems operate on entities in the world. In order to define the set of entities to operate on, dojo should expose a query api similar to the bevy engine.

The api should be expressive, allowing developers to easily construct complex queries, which are interpreted, optimized and inlined at compile time to reduce onchain compute.

The interface should follow the bevy interface:

fn move(query: Query<(Position, Health)>) {
    // @NOTE: Loops are not available in Cairo 1.0 yet.
    for (position, health) in query {
        let is_zero = position.is_zero();
    }
    return ();
}

Expansion:

#[contract]
mod MoveSystem {
    struct Storage {
        world_address: felt,
    }

    #[external]
    fn initialize(world_addr: felt) {
        let world = world_address::read();
        assert(world == 0, 'MoveSystem: Already initialized.');
        world_address::write(world_addr);
    }

    #[external]
    fn execute() {
        let world = world_address::read();
        assert(world != 0, 'MoveSystem: Not initialized.');

        let position_id = pedersen("PositionComponent");
        // We can compute the component addresses statically
        // during compilation.
        let position_address = compute_address(position_id);
        let position_entities = IWorld.lookup(world, position_id);

        let health_id = pedersen("HealthComponent");
        let health_address = compute_address(health_id);
        let health_entities = IWorld.lookup(world, health_id);

        let entities = intersect(position_entities, health_entities);

        for entity_id in entities {
            let is_zero = IPosition.is_zero(position_address, entity_id);
        }
    }
}

World factory

Currently, spawning a world requires several transactions to deploy the contract + register systems + components.

We should implement a world factory that does this atomically, taking as arguments a world name, systems, and components. Additionally, it should expose methods to update the canonical world and executor class hashes:

trait WorldFactory {
    fn set_world(class_hash: ClassHash);
    fn set_executor(class_hash: ClassHash);
    fn spawn(name: felt252, components: Array<ClassHash>, systems: Array<ClassHash>);
}

ERC* System + Components

In order to make the properties of existing standards available in a dojo world, we need to implement them with components and systems.

For example, an ERC20 implementation could consist of:

FungibleMetadata + Balance components
Transfer, Mint system

That would allow the world to easily interact with a user's token balance.

In addition, we would provide an ERC20 wrapper that implements the ERC20 interface and makes calls to the world contract / transfer system in order to perform transfers, return user balances, etc.

Discussion: Generic bitmapping at component level

Following on from this:

https://twitter.com/tarrenceva/status/1624458689506852864?s=46&t=poayF_lDwCo9q3jVBgaWsQ

Developers do not want to worry about performance and gas cost optimisation when building the game; it should just work. In Eternum we had custom bitmaps per component. This was fine but required a lot of code.

Whilst mapping out the Survivor game, I was thinking about how we could achieve this in a generic and simple way.

This may all not even be relevant after proto-danksharding, but still should be explored.

If we include in the component the bit size of the struct shape, then we could create a generic bitmap for each component with the storage index defined by the entity_id as the index and pack every value tightly for that component. Think of it as a bitpacked table, where we can lookup the value of any entity if provided the entity_id, the struct interface and the bits of the struct storage. From that we can read or write to any location within that component.

To read a value we just need

  • Entity ID 1
  • Struct interface
  • Struct Bits

From this information, we can infer where the data that we want is hiding in the felt table. We do this by calculating how many slots are within the felt according to the bit size.

So

251 bits in felt / 8 bits for storage in component = 31 potential locations within each felt

So to get the value we need we would:

Entity ID 1

// component bits
const bits = 8;
const slots_per_felt = 31;  // floor(251 / bits)

read(entity_id)
felt_index = entity_id / slots_per_felt   // which felt in the table
slot_index = entity_id % slots_per_felt   // position within that felt

// this function would retrieve the packed felt at `felt_index` and then
// unpack the value at `slot_index` according to `bits`
get_value(felt_index, slot_index, bits) -> (value)

In the actual function, we would need to do the casting etc, but this should explain the rough idea.
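The rough idea above can be sketched in Rust; this uses u128 as a stand-in word for felt252 (a felt has 251 usable bits, giving 31 eight-bit slots, while a u128 gives 16, but the index arithmetic is identical), and the function names are illustrative:

```rust
const BITS: u32 = 8;           // bit width of the component value
const SLOTS: u32 = 128 / BITS; // values packed per storage word

// Write `value` into the packed table at the slot for `entity_id`.
fn write(table: &mut Vec<u128>, entity_id: u32, value: u8) {
    let word = (entity_id / SLOTS) as usize; // which word in the table
    let shift = (entity_id % SLOTS) * BITS;  // offset within the word
    if table.len() <= word {
        table.resize(word + 1, 0);
    }
    let mask = ((1u128 << BITS) - 1) << shift;
    table[word] = (table[word] & !mask) | ((value as u128) << shift);
}

// Read the packed value back out for `entity_id`.
fn read(table: &[u128], entity_id: u32) -> u8 {
    let word = (entity_id / SLOTS) as usize;
    let shift = (entity_id % SLOTS) * BITS;
    ((table[word] >> shift) & ((1u128 << BITS) - 1)) as u8
}

fn main() {
    let mut table = Vec::new();
    write(&mut table, 1, 42);  // entity 1, first word
    write(&mut table, 17, 7);  // entity 17 lands in the second word
    println!("{} {}", read(&table, 1), read(&table, 17)); // 42 7
}
```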

World Sync

World Sync is a feature of the Dojo toolchain which queries the world contract on starknet, retrieves the current components of the world, and reconstructs them locally in Cairo. The goal is to improve the developer experience for systems / mods.

Use Red Black Tree for indexing

Currently, doing a scan over an indexed component doesn't return an ordered entity array, so when querying across multiple components, we need to sort before finding the intersection.

We can update to use a RBT to store components sorted by entity id and improve query performance.
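With RBT-backed indexes, each component scan already returns entity ids in sorted order, so a query intersection becomes a single linear merge with no sort. A minimal sketch (names are illustrative):

```rust
use std::cmp::Ordering;

// Intersect two sorted entity-id slices in O(n + m) with a linear merge.
fn intersect(a: &[u64], b: &[u64]) -> Vec<u64> {
    let (mut i, mut j) = (0, 0);
    let mut out = Vec::new();
    while i < a.len() && j < b.len() {
        match a[i].cmp(&b[j]) {
            Ordering::Less => i += 1,
            Ordering::Greater => j += 1,
            Ordering::Equal => {
                out.push(a[i]);
                i += 1;
                j += 1;
            }
        }
    }
    out
}

fn main() {
    // e.g. entities that have both a Position and a Health component
    println!("{:?}", intersect(&[1, 3, 5, 7], &[2, 3, 7, 9])); // [3, 7]
}
```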

World Config

Most games will have a rather large config that contains variables that are global to the world. The approach we are taking is:

WorldConfig.cairo - a unique component situated in the project's components dir.

This config contains global variables which any system can access, making it a component that allows new config components to be deployed and swapped out, effectively adjusting the global parameters of the world.

I am thinking the approach we take is:

Users add a WorldConfig.cairo to their component dir; at deployment time we check whether a component with that name exists, and if so we assign its address to a storage location in the World contract.

Then the user can query getConfig on the world, which returns the value where needed.

Thoughts? @tarrencev @milancermak @fracek

discussion: Dev container

There is quite a bit of setup right now to get the environment of the project ready.

This includes:

  • Rust
  • Nodejs 18
  • vsce plugins
  • build commands

I also expect this environment to evolve rapidly as we add new tools and build commands.

Dev containers come with the downside of the additional overhead of running a container. However, developers who prefer a clean build directly on their machine could still do so.

Looking for feedback @tarrencev @fracek @milancermak

Consider automatically importing component dispatcher impl and trait

With the introduction of https://github.com/dojoengine/dojo/pull/168/files#diff-1d7a7dbdbca246355d93023e3fc830cc8af760cd932c03e965fd3a77f5bf553bR28-R32, which resolves #168, developers need to manually import the library dispatcher impl and trait.

    use super::Position;
    use super::IPositionLibraryDispatcher;
    use super::IPositionDispatcherTrait;
    use super::Player;
    use super::IPlayerLibraryDispatcher;
    use super::IPlayerDispatcherTrait;

We could handle this automatically. The downside is that it's implicit behavior that might be confusing if the developer tries to do the imports themselves. Although I'm not sure in what case they would do that.

Compute component contracts address

Given all components in a world are deployed by the world, we can compute a component's address given its component id:

let module_id = pedersen("<module_name>");
let address = deploy(
    class_hash=proxy_class_hash,
    contract_address_salt=module_id,
    constructor_calldata_size=0,
    constructor_calldata=[],
    deploy_from_zero=FALSE,
);
IProxy.set_implementation(class_hash);
IPositionComponent.initialize(address, ...);
registry.write(module_id, address);

Currently we don't know the proxy class hash so we can stub that.

starknet-rs provides a helper for computing a contract address: https://github.com/xJonathanLEI/starknet-rs/blob/ab4752f2b26bbb43dec3450e4358d88fa7a496e2/starknet-core/src/utils.rs#L130

The method should be used here:

// TODO(https://github.com/dojoengine/dojo/issues/27): Properly compute component id

Tracking world state

So far we've planned to use a similar pattern to mud for tracking world state externally, emitting events from the world contract for indexers. I'm wondering if we could instead use the state diff directly, since we can constrain the storage address of the component state in the generated code. For example, the storage location for an entity's position state below would be:

pedersen(keccak('state'), entity_id)

#[contract]
mod PositionComponent {
    struct Position { x: felt, y: felt }

    struct Storage {
        world_address: felt,
        state: Map::<felt, Position>,
    }
}

Given the indexer knows about the components / systems that exist in the world, it could compute the entity a state update corresponds to. The tricky part is that the storage address is not reversible, so the indexer would need to track entity ids that have been provisioned and compute/store the component state address.

The upside is that the state diff is part of consensus state, so if we do this, clients can verifiably pull world state off RPC nodes using storage proofs, which would allow for light clients etc. that don't need to replay all events to reconstruct the world state.
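The indexer-side bookkeeping might look like the following sketch. Since the storage address is not reversible, the address is precomputed once when an entity is provisioned and kept in an address-to-entity map; note that `storage_address` here is a stand-in mixing function, NOT the real pedersen(keccak('state'), entity_id) computation, and all names are hypothetical:

```rust
use std::collections::HashMap;

// Stand-in for the real Starknet storage-address hash (assumption).
fn storage_address(entity_id: u64) -> u64 {
    entity_id.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

#[derive(Default)]
struct StateIndex {
    by_address: HashMap<u64, u64>, // storage address -> entity id
}

impl StateIndex {
    // Called when an entity is provisioned: precompute and remember
    // the address its component state will be written to.
    fn provision(&mut self, entity_id: u64) {
        self.by_address.insert(storage_address(entity_id), entity_id);
    }

    // Resolve a state-diff storage address back to the entity it belongs to.
    fn entity_for(&self, address: u64) -> Option<u64> {
        self.by_address.get(&address).copied()
    }
}

fn main() {
    let mut index = StateIndex::default();
    index.provision(7);
    println!("{:?}", index.entity_for(storage_address(7))); // Some(7)
}
```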

Tagging @fracek @ponderingdemocritus @milancermak for feedback.

Expose interactive UI for managing world migrations

This task is to expose an interactive UI for managing a world's migration.

Once we have resolved #161 and #162 and have a migration strategy for a world, the cli should provide an interactive way to update the world to reflect the changes.

Updating a remote world involves:

Validation:

  1. Verify the caller has permission to update any existing components and systems that are registered
  2. Verify that the caller has permission to extend write permission to any components that are being updated by systems

Execution:

  1. Declare any new classes (each declare requires a transaction, iirc)
  2. Register classes by calling register_system and register_component on the world
    fn register_system(class_hash: ClassHash) {

2.1) New classes can be registered permissionlessly
2.2) Existing classes can only be re-registered if the caller has permission (for now this will result in a transaction error; eventually we should expose a way for someone to propose an update that can be confirmed by someone with permission)

  3. Provide write access for systems to any necessary components.

3.1) If the component is new, this can be done permissionlessly
3.2) If the component exists and the caller has permission to grant write access, do so
3.3) If the component exists but the caller doesn't have permission to grant, request access (not supported right now)

One approach here could be to use a TUI to interactively visualize and manage the tasks. Happy to explore other, simpler approaches too.

Include system dependencies in manifest

As part of the build step, we generate a world manifest.json that describes the components + systems compiled.

For example, the examples manifest looks like:

{
  "components": [
    {
      "name": "Moves",
      "members": [
        {
          "name": "remaining",
          "type": "core::integer::u8"
        }
      ]
    },
  ...
  ],
  "systems": [
    {
      "name": "MoveSystem",
      "inputs": [
        {
          "name": "direction",
          "type": "core::felt252"
        }
      ],
      "outputs": [],
      "dependencies": []
    }
  ]
}

This task is to track write dependencies during compilation, in order to properly generate a migration plan that potentially requires requesting write access to a component.

During compilation, auxiliary information is collected here:

pub struct DojoAuxData {

Updates to the aux data are integrated based on the returned PluginResult

systems: vec![format!("{}System", name).into()],

Set commands are parsed here:

impl CreateCommand {

The task here is to track components that are set by a system and include them as a dependency of the system in the manifest file.

With this information, the migration planner can identify when a system's execution will fail due to permission issues.
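With dependency tracking in place, a system entry in the manifest might gain a populated dependencies field; this is a hypothetical shape, assuming dependencies are recorded by component name:

```json
{
  "name": "MoveSystem",
  "inputs": [{ "name": "direction", "type": "core::felt252" }],
  "outputs": [],
  "dependencies": ["Moves"]
}
```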

Include webrtc package

WebRTC is a specification for real-time communication.

It supports various audio/video codecs as well as data channels.

We could use WebRTC to support state channels. It can be used to establish a p2p connection between clients or can be proxied through an SFU for scalability. In SFU mode, we could use SFrame for e2e encryption of the state channel messages.

There is a robust implementation here:
https://github.com/webrtc-rs/webrtc

They have some examples of establishing data channels.

I've previously written a golang sfu at https://github.com/pion/ion-sfu

Figure out how Cairo plugins are intended to work

Right now, the cairo-lang-dojo plugin follows the implementation of the cairo-lang-starknet plugin. It is unclear exactly how the starkware team ultimately intends for plugins to be integrated into the pipeline.

The language database at https://github.com/dojoengine/dojo-beta/blob/main/crates/cairo-lang-dojo/src/db.rs defines the compilation pipeline.

Currently, the base derive plugin consumes all derive macros so the full compilation pipeline doesn't work (the #[derive(Component)] macro is consumed before the dojo plugin). I'm not sure if there is a different plan for defining custom derive expansions?

Currently, the simplest architecture is probably to do something similar to what the existing test does and output Starknet contracts that can be passed to the starknet compiler.

Pluggable authorization

I think it would be cool to have a pluggable access control system. We can do this by defining a standard system interface that the world can register and use for authorization.

For example

#[derive(Component)]
struct Auth {
    role: felt252
}

#[derive(Component)]
struct Role {
    authorized: bool
}

#[system]
mod AuthorizeSystem {
    fn execute(caller_id: felt252, resource_id: felt252) {
        let auth = commands::<Auth>::get(caller_id.into());
        let role = commands::<Role>::get((auth.role, resource_id).into());
        assert(role.authorized, 'not authorized');
    }
}

And create systems for GrantRole, RevokeRole, and RenounceRole too.

The Authorize system name would be reserved for this functionality.

About use of Scarb

Hey!

@tarrencev reached out to me on private channels about the idea of integrating Scarb's compiler ops in Dojo. That's a discussion worth publicizing, so I'll write down my thoughts here:

Preface

  1. #40 seems to do a lot of stuff that is already worked out by Scarb's ops::compile.
  2. Scarb is meant to be available as a crate. The only blocker to publishing it to crates.io, as of the time of writing, is that we still depend on unpublished Cairo revisions.
    1. One thing I am still thinking about is whether it wouldn't be better to extract Scarb's compiler logic (i.e. the ops::compile function) to a separate crate, and perhaps even make the compiler an extension provided as a separate library.
    2. But this is a nuance at this moment; Dojo and others shouldn't feel this change much if it happens.
  3. The only trick I can see for you is that currently Scarb doesn't provide an entrypoint to boot the compiler with custom plugins. This may be something I would welcome you to contribute if you wish, because I and @maciektr definitely don't have spare time at this moment to tackle this 😞

Scarb and Dojo current state context

  1. The general framework of compiler extensibility is the targets mechanism.
  2. https://github.com/software-mansion/scarb/blob/6ba612b08c2e1b7ab5f1998e3d014648164c7eb7/scarb/src/compiler/targets/starknet_contract.rs basically has the code that is reimplemented in #40
    • I didn't dive into #40's code, but as far as I can logically infer, the only "extension" that #40 does is adding the DojoPlugin?
    • OK, you also extend corelib but that's a hack anyway that has to be fleshed out on Cairo language level, not ours.
    • I assume you might also want to provide a dojo package, but Scarb already has a solution for this: users should simply put dojo = ... in the [dependencies] section of their Scarb.toml.

Plan for extending Scarb to support Dojo's needs

I see two phases here: one that gets the meat done but is still kinda hacky, and a second phase that cleans everything up:

Things I am open to accepting contributions for

What is crucially missing for Dojo are two things:

  1. There should be a possibility to register custom targets dynamically:
    • That has to be done anyway when we'll deal with extension targets.
    • Something like type TargetRepository = HashMap<TargetName, Box<dyn Fn(...) -> Box<dyn TargetCompiler>>> is missing.
    • This repository object could live within Config, i.e. add a target_repository: TargetRepository field to the Config struct.
    • Dojo would then be able to initialize the config, inject their target into the repository, and then call ops::compile 🎉
  2. Next, it would be a shame if Dojo had to copy-paste https://github.com/software-mansion/scarb/blob/6ba612b08c2e1b7ab5f1998e3d014648164c7eb7/scarb/src/compiler/targets/starknet_contract.rs, so extracting a common interface out of this would be fine.
    • No idea how this should look; I would happily leave it to the contributor to think it out and show me a suggestion.

The hacky part: all of this is still within the scarb crate and is pub-exported from the scarb::compiler module.

Things that should be done by myself, but can be deferred

Disclaimer: I am really not sure it should look like this, because I haven't thought it through deeply.

  1. The compiler part of the scarb crate probably has to be extracted into separate crates:
    1. scarb-build, which will contain ops::compile and everything downstream, like the TargetCompiler trait and the lib target implementation.
    2. scarb-target-starknet-contract, which will contain starknet-specific compiler logic.
  2. scarb-target-starknet-contract will depend on scarb-build, and both will depend on a slimmed-down scarb.
  3. The big benefit of this is that scarb will not depend on cairo-lang-* crates; that's funny, but I think this will help compilation times significantly. (This also implies extracting scarb fmt, which is going to land today.)

I'll be very happy to hear your thoughts on this, and also criticism of my design decisions in Scarb. Many of the decisions I made in Scarb haven't yet been publicly validated by anyone.

Dojo CLI

Create a dojo-cli crate with a rough (eventual) API, generated by ChatGPT below.

Usage: dojo [command] [options]

Commands:
  build        Build the project's ECS, outputting smart contracts for deployment
  migrate      Run a migration, declaring and deploying contracts as necessary to update the world
  bind         Generate rust contract bindings
  inspect      Retrieve an entity's state by entity ID

Options:
  -h, --help   Show help information

Command "build":
  Usage: dojo build [options]

  Options:
    -h, --help  Show help information

Command "migrate":
  Usage: dojo migrate [options]

  Options:
    -h, --help     Show help information
    --plan         Perform a dry run and output the plan to be executed
    --world_address  World address to run migration on

Command "bind":
  Usage: dojo bind [options]

  Options:
    -h, --help  Show help information

Command "inspect":
  Usage: dojo inspect [options]
  Options:
    -h, --help  Show help information
    --id        Entity ID to retrieve state for
    --world_address  World address to retrieve entity state from
