0xpolygonmiden / miden-base

Core components of the Polygon Miden rollup

License: MIT License
Currently, account IDs are 21 bytes long (technically 24 bytes, but the last 3 bytes are guaranteed to be all zeros). There is no good reason why they need to be this long, because we can prevent account ID collisions at account creation time.
Specifically, when a new account is being created, we must check that an account with such an ID does not already exist. This check is needed regardless of whether the IDs are 32, 21, or 16 bytes long. Thus, in theory, we could make account IDs pretty small. But we probably don't want to make them too small for two reasons:
Given the above, my thinking is that we could safely reduce account ID size to 16 bytes (2 field elements). We would not be unique in this - Diem account addresses are 16 bytes long.
With grinding, we might even be able to go further: account IDs would be 16 bytes, but we could force 3 bytes to be zeros - thus giving us an effective ID size of 13 bytes. Though, we should probably think this through a bit more.
Most of the transaction kernel implementation was specified in #3. Next, we need to implement a minimal working tx kernel, including the prologue script, note setup script, and epilogue scripts, as well as any other required components that come up along the way.
The dependencies for this are roughly the following:
`mtree_cmw` to `mtree_cset` with slightly different semantics.

@hackaugusto, @frisitano, @bobbinth, @grjte
The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:
As mentioned in #7 (reply in thread), it would be beneficial to add one more component to the `Note` object. This component would define a set of values which would be put on the top of the stack before note scripts start executing. Internally, this would be just a vector of field elements (similar to how we have a vector of assets).
Adding this component would affect how we compute note hash and note nullifier. In both cases, I think we should compute the hash of inputs first, and then include this hash in the note hash and nullifier computations.
This is a placeholder issue for the Miden RPC endpoint
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.
The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer has an RPC endpoint for Miden Clients and other services making RPC requests. We need to define this endpoint.
[WIP]
We need to make sure that the set of input assets into a transaction is the same as the set of output assets. This can be achieved by tracking an `input_vault`.

One open question is how to handle the `mint` and `burn` procedures of faucet accounts. One option is to keep track of minted/burned assets and process them accordingly during the epilogue. Another option is to modify the `input_vault` with each call to the `mint`/`burn` procedures - this way, no additional work is needed during the epilogue.
We must implement a `NoteMetadata` struct that should include `sender` (1 element), `num_assets` (1 element), and a `tag` (1 element). We also need a method to convert it to a `Word`. We should update `ConsumedNotesInfo` and `Note` to use this struct.
The struct should look as follows:
```rust
struct NoteMetadata {
    sender: Felt,
    tag: Felt,
    num_assets: Felt,
}
```
The word representation should be:
`[sender, tag, num_assets, 0]`
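The `Word` conversion can be sketched as follows, using simplified stand-in types (the real `Felt` is a field element type and `Word` is `[Felt; 4]`; here `Felt` is modeled as a plain `u64`):

```rust
// Stand-in types: the real Felt is a field element, modeled here as u64.
type Felt = u64;
type Word = [Felt; 4];

struct NoteMetadata {
    sender: Felt,
    tag: Felt,
    num_assets: Felt,
}

impl From<NoteMetadata> for Word {
    // Lay the metadata out as [sender, tag, num_assets, 0], matching the
    // word representation proposed above.
    fn from(m: NoteMetadata) -> Word {
        [m.sender, m.tag, m.num_assets, 0]
    }
}

fn main() {
    let word = Word::from(NoteMetadata { sender: 7, tag: 42, num_assets: 3 });
    assert_eq!(word, [7, 42, 3, 0]);
}
```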
Currently, the number of assets associated with a consumed or created note has its own slot in memory. With the proposed change above, we will be including `num_assets` in the note metadata. We should modify the transaction kernel to reflect this change.
From an initial assessment, no changes are required. We should confirm this.
The memory slot for metadata already exists.

The `compute_created_note_vault_hash` procedure should be confirmed.

We should refactor the kernel inline docs to align with the standards concluded in the masm documentation discussion. We should ensure that we include the number of cycles and the state of the stack at appropriate locations in the execution pipeline.
We have recently introduced support for the `advice_map` in the advice provider. As such, we should update the `miden::sat::prologue::process_consumed_notes_data` procedure to leverage this. As it currently stands, it uses the `advice_stack`.
The full list of kernel procedures now is as follows:
Procedure | Context | Status |
---|---|---|
`miden::sat::account::get_id` | account, note | ✅ |
`miden::sat::account::get_nonce` | account, note | ✅ |
`miden::sat::account::get_initial_hash` | account | ✅ |
`miden::sat::account::get_current_hash` | account | ✅ |
`miden::sat::account::incr_nonce` | account | ✅ |
`miden::sat::account::add_asset` | account | ✅ |
`miden::sat::account::remove_asset` | account | ✅ |
`miden::sat::account::get_balance` | account | ✅ |
`miden::sat::account::has_nfasset` | account | ✅ |
`miden::sat::account::get_item` | account | ✅ |
`miden::sat::account::set_item` | account | ✅ |
`miden::sat::account::set_code` | updatable account | ✅ |
`miden::sat::faucet::mint` | faucet account | ✅ |
`miden::sat::faucet::burn` | faucet account | ✅ |
`miden::sat::note::get_assets` | note | ✅ |
`miden::sat::note::get_sender` | note | ✅ |
`miden::sat::tx::get_block_number` | account, note | ✅ |
`miden::sat::tx::get_block_hash` | tx | ✅ |
`miden::sat::tx::get_input_notes_hash` | account, note | ✅ |
`miden::sat::tx::get_output_notes_hash` | account, note | ✅ |
`miden::sat::tx::create_note` | account | ✅ |
`miden::sat::tx::add_asset_to_note` | account | |
The Miden Client must be able to store state data itself. Miden accounts can live either on-chain or off-chain. For on-chain accounts, the full account state is always recorded on-chain (i.e., on Miden). For off-chain accounts, only a commitment to the account state (i.e., the state hash) is recorded on-chain.
So the Miden Client for the testnet must be able to store all account data itself in a database.
I propose we introduce a `memory` module which holds procedures for computing memory addresses used by the library. As all of the constants we use in the library are related to the memory layout, I think we should be able to centralise all constants in this single `memory` module.
There have been a lot of discussions, but we need to bring all this info into one place and hammer out the details so we can move forward with the rollup implementation.
@bobbinth, @grjte, @vlopes, @hackaugusto
The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:
This is a placeholder issue for the Miden Transaction Prover. The Miden Client (user facing) and the Miden Node will use the Transaction Prover to prove the correctness of a transaction execution (using the tx kernel).
WIP
Currently, we set the maximum number of assets in the note at 1000. This number is somewhat arbitrary, and I'm thinking that a different number would work better.
My thinking is that at the very least we should reduce the maximum number of assets to 256. This way, the number can be encoded using exactly 1 byte. We could go even further and set the maximum at 16 - but I wonder if that's too restrictive.
For note script inputs (see #9), we should probably set a reasonable maximum as well. I am thinking 16 would probably be a good number as it doesn't require messing around with the overflow table and could make recursive proof verification simpler.
Currently, data for non-fungible assets is stored as a simple vector of bytes. It could be good to add some structure to this (see #2 (comment)). The key properties that we'd like to have for NFA data are:
In #5 we introduced a `Note` object. This object can be relatively big, and so we cannot reduce it to something as simple as a `Word`, which worked for `AccountId` and `Asset`. Thus, we need to implement serialization of notes similar to what we've done for Miden assembly.

Specifically, I'm thinking we should use the same approach with the `Serializable`/`Deserializable` traits. The only thing is that it is a bit weird that these traits currently live in `miden-assembly`. My thinking is that they should be in `miden-core`. So, maybe the first thing here is to move these traits into `miden-core` and then implement serialization here.
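As a sketch of what this could look like, the traits below mirror the `Serializable`/`Deserializable` pattern; the method names, error handling, and the `SerialNum` example type are assumptions here, not the actual miden-assembly API:

```rust
use std::io::{self, Read, Write};

// Sketch of the serialization traits; the real traits use dedicated
// ByteWriter/ByteReader types rather than std::io.
trait Serializable {
    fn write_into<W: Write>(&self, target: &mut W) -> io::Result<()>;
}

trait Deserializable: Sized {
    fn read_from<R: Read>(source: &mut R) -> io::Result<Self>;
}

// Hypothetical example type: a note serial number serialized as
// little-endian bytes.
struct SerialNum(u64);

impl Serializable for SerialNum {
    fn write_into<W: Write>(&self, target: &mut W) -> io::Result<()> {
        target.write_all(&self.0.to_le_bytes())
    }
}

impl Deserializable for SerialNum {
    fn read_from<R: Read>(source: &mut R) -> io::Result<Self> {
        let mut buf = [0u8; 8];
        source.read_exact(&mut buf)?;
        Ok(SerialNum(u64::from_le_bytes(buf)))
    }
}

fn main() -> io::Result<()> {
    // Round-trip: serialize, then deserialize and compare.
    let mut bytes = Vec::new();
    SerialNum(0xABCD).write_into(&mut bytes)?;
    let restored = SerialNum::read_from(&mut io::Cursor::new(bytes))?;
    assert_eq!(restored.0, 0xABCD);
    Ok(())
}
```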
We should document the memory layout and kernel semantics using diagrams with in-depth explanations.
A valid account seed must contain a certain number of trailing zeros. These trailing zeros serve as proof of work for account creation. To improve the efficiency of seed generation, we should implement a multi-threaded seed generator.
related: #33
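A minimal multi-threaded grinding sketch, using a stand-in hash (`DefaultHasher`) in place of the real account hash and assuming the proof of work is measured in trailing zero bits of the digest:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-in for the real account hash function.
fn seed_digest(seed: u64) -> u64 {
    let mut h = DefaultHasher::new();
    seed.hash(&mut h);
    h.finish()
}

// Find a seed whose digest has at least `zero_bits` trailing zero bits,
// with each thread striding through its own residue class of seeds.
fn grind_seed(zero_bits: u32, num_threads: u64) -> u64 {
    let found = Arc::new(AtomicBool::new(false));
    let handles: Vec<_> = (0..num_threads)
        .map(|t| {
            let found = Arc::clone(&found);
            thread::spawn(move || {
                let mut seed = t;
                loop {
                    // Stop early if another thread already succeeded.
                    if found.load(Ordering::Relaxed) {
                        return None;
                    }
                    if seed_digest(seed).trailing_zeros() >= zero_bits {
                        found.store(true, Ordering::Relaxed);
                        return Some(seed);
                    }
                    seed += num_threads;
                }
            })
        })
        .collect();
    handles
        .into_iter()
        .filter_map(|h| h.join().unwrap())
        .next()
        .expect("at least one thread finds a seed")
}

fn main() {
    let seed = grind_seed(8, 4);
    assert!(seed_digest(seed).trailing_zeros() >= 8);
}
```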
The Client must be able to sign transactions and messages using the optimized Falcon verification.
We need to spec out what that means exactly.
This is a placeholder issue for the Miden Transaction Aggregator
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.
The Transaction Aggregator is a module of the Miden Node and will batch transaction proofs together. In Miden, there is a proof for every transaction. Transaction proofs will be aggregated in batches by the Transaction Aggregator.
This is a placeholder issue for the Miden State Databases
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.
The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer keeps track of the state in the Miden Rollup. The module maintains three databases to describe the state:
These databases are represented by authenticated data structures (e.g., Merkle trees), such that we can easily prove that items were added to or removed from a database, and a commitment to the database would be very small.
[WIP]
Transaction prologue is a program which is executed at the beginning of a transaction (before note scripts and tx script are executed). The prologue needs to accomplish the following tasks:
Before a transaction is executed, the stack is initialized with all inputs required to execute a transaction. I'm thinking these inputs could be arranged like so (from the top of the stack):
The shape of global inputs still requires more thought - so, I'll skip it for now - but how the rest works is fairly clear. Specifically, we need to read the data for account and notes from the advice provider, write it to memory, hash it, and verify that the resulting hash matches the commitments provided via the stack.
Overall, the layout of root context's memory could look as follows:
The bookkeeping section is needed to keep track of variables which are used internally by the transaction kernel. This section could look as follows:
Memory address | Variable | Description |
---|---|---|
… | `tx_vault_root` | Root of the vault containing all assets in the transaction. |
… | `num_executed_notes` | Number of notes executed so far during transaction execution. |
… | `num_created_notes` | Number of notes created so far during transaction execution. |
There will probably be other variables which we need to keep track of, but I'll leave them for the future.
As mentioned above, I'm skipping this for now.
This section will contain account details. Assuming …:

Memory address | Variable | Description |
---|---|---|
… | `acct_hash` | Hash of the account's initial state. |
… | `acct_id` | ID of the account. Only the first 2 - 3 elements of the word are relevant. |
… | `acct_code_root` | Root of the account's code Merkle tree. |
… | `acct_store_root` | Root of the account's storage Sparse Merkle tree. |
… | `acct_vault_root` | Root of the account's asset vault. |
… | `acct_nonce` | Account's nonce. Only the first element of the word is relevant. |
This section will contain details of all notes to be consumed. The layout of this section could look as follows:
Assuming …:

Memory address | Variable | Description |
---|---|---|
… | `num_notes` | Number of notes to be consumed in this transaction. |
… | `nullifiers` | A list of nullifiers for all notes to be consumed in this transaction. |
… | `notes` | A list of all notes to be consumed in this transaction. |
Here, the nullifier for `note0` is at memory address …, the nullifier for `note1` is at memory address …, and so on.

Data for `note0` starts at address …, data for `note1` starts at address …, and so on.
Assuming …:

Memory address | Variable | Description |
---|---|---|
… | `note_hash` | Hash of the note. |
… | `serial_num` | Serial number of this note. |
… | `script_hash` | MAST root of this note's script. |
… | `input_hash` | Sequential hash of this note's inputs. |
… | `vault_hash` | Sequential hash of this note's assets. |
… | `num_assets` | Number of assets contained in this note's vault. |
… | `assets` | A list of all note assets laid out one after another. |
Here, each asset occupies 1 word, and thus the first asset in the note will be at address …
We do not "unhash" inputs because they are needed only when we start executing a given note - so, we can unhash them then.
This section will contain data of notes created during execution of a transaction. It is not affected by transaction prologue - so, I'll skip it for now.
tx vault
To build a unified transaction vault we can do the following:
To implement this, we need a compact Sparse Merkle tree implemented in Miden assembly.
One of the inputs into transaction kernel will be a commitment to notes DB. To verify that notes are in the notes DB we'd need to do the following:
To do this, we need to have a Merkle Mountain Range implemented in Miden assembly, and also to better define the shape of global inputs.
We should define some set of "well-known" scripts which users could use to transfer assets between their accounts. Two of the simplest such scripts could be:

- Pay-to-ID (`P2ID`).
- Reclaimable pay-to-ID (`P2IDR`).

These scripts are described in detail below.
This script can be used when a note is intended to deposit all of its assets into a specific account. The pseudocode for this script looks as follows:
```
begin
    assert(account.get_id() == note.inputs[0])
    for a in note.assets
        account.receive_asset(a)
    end
end
```
The script assumes that a note comes with a single input which contains the ID of the recipient's account.
This script can be used when a note is intended to deposit all of its assets into a specific account within some period. If this time expires, another user (presumably the sender of the note) will be able to claim all of the note's assets. The pseudocode for this script looks as follows:
```
begin
    if chain.get_block_height() > note.inputs[0]
        assert(account.get_id() == note.inputs[1] || account.get_id() == note.inputs[2])
    else
        assert(account.get_id() == note.inputs[1])
    end
    for a in note.assets
        account.receive_asset(a)
    end
end
```
The script assumes that a note comes with the following inputs:

- `input[0]` contains the block height after which the note can be reclaimed by the sender.
- `input[1]` contains the recipient's account ID.
- `input[2]` contains the sender's account ID.

These scripts require the following kernel procedures to exist:
Procedure | Description | Context |
---|---|---|
get account ID | Returns ID of the account in the current transaction. | account, note |
get note assets | Returns all assets in the current note. | note |
get note inputs | Returns all inputs for the current note. | note |
get block height | Returns current block height for the transaction. | note, account |
They also assume that the recipient's and sender's accounts expose a `receive_asset` procedure which can be used to deposit a single asset into the account's vault.
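The P2IDR eligibility rule above can be sketched as follows; `AccountId` and the parameter layout mirror the pseudocode, not the real kernel types:

```rust
// Stand-in type; real account IDs are field elements.
type AccountId = u64;

// Before the reclaim height, only the recipient may consume the note;
// afterwards, the sender may reclaim it as well.
fn can_consume(
    account_id: AccountId,
    block_height: u64,
    reclaim_height: u64,  // note.inputs[0]
    recipient: AccountId, // note.inputs[1]
    sender: AccountId,    // note.inputs[2]
) -> bool {
    if block_height > reclaim_height {
        account_id == recipient || account_id == sender
    } else {
        account_id == recipient
    }
}

fn main() {
    // Before the reclaim height, only the recipient can consume the note.
    assert!(can_consume(10, 100, 500, 10, 20));
    assert!(!can_consume(20, 100, 500, 10, 20));
    // After the reclaim height, the sender can reclaim it as well.
    assert!(can_consume(20, 501, 500, 10, 20));
}
```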
Users use Miden Clients to interact with the network. The backend of any wallet used on Miden will be a Miden Client. Miden Clients consist of several components.
Prerequisite: #29 (when the transaction kernel is done, we can start with the transaction prover module).
The Transaction Prover will be able to prove the correctness of a transaction. This is needed for client-side proving, our USP. See #50.
The Client must be able to sign transactions and messages using the optimized Falcon verification. #56
The Miden Client needs an interface to an account on the Miden rollup. It must be able to read on-chain data. Additionally, the Miden Client should provide an interface to a user or an API to an existing wallet like MetaMask Snaps. Should be covered in #22
The Miden Client needs to be able to store all necessary account data. See #57
A discussion about the specification of the transaction object was started in #27, and an issue for the first type (ProvenTransaction) was already created in #38. We should finalize the specification and create issues for the other types needed.
Notes on the tx kernel procedures needed in reference to transactions are also here:
As it stands, all dependency versions are specified in the `Cargo.toml` of the respective crate. This results in some duplication of dependencies and awkward management. Instead, it may be more convenient to specify the versions at the workspace level so that all crates in the repo can source them from a single location. This eases dependency management. https://doc.rust-lang.org/nightly/cargo/reference/specifying-dependencies.html#inheriting-a-dependency-from-a-workspace
There are certain properties that are relatively expensive to compute, e.g. `account::get_current_hash` and `tx::get_output_notes_hash`, as they require a large amount of hashing to be performed. We should implement a caching mechanism that allows us to only recompute the value when it has changed. The link to the original suggestion is here.
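One way to sketch such a caching mechanism, with a stand-in hash and a simplified account holding only a nonce:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Simplified account; the real state has code, storage, vault, etc.
struct Account {
    nonce: u64,
    cached_hash: Option<u64>, // None means the cache is stale
}

impl Account {
    fn new(nonce: u64) -> Self {
        Account { nonce, cached_hash: None }
    }

    // Any mutation invalidates the cached hash.
    fn incr_nonce(&mut self) {
        self.nonce += 1;
        self.cached_hash = None;
    }

    // Recompute the hash only when the cache was invalidated.
    fn hash(&mut self) -> u64 {
        if let Some(h) = self.cached_hash {
            return h;
        }
        let mut hasher = DefaultHasher::new();
        self.nonce.hash(&mut hasher);
        let h = hasher.finish();
        self.cached_hash = Some(h);
        h
    }
}

fn main() {
    let mut acct = Account::new(0);
    let h1 = acct.hash();
    assert_eq!(h1, acct.hash()); // second call is served from the cache
    acct.incr_nonce();
    assert_ne!(h1, acct.hash()); // recomputed after mutation
}
```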
Ability to call user-accessible kernel procedures (listed in #67) should be restricted based on the following factors:

- The `mint` and `burn` procedures can be called only by faucets. This check can be performed simply by looking at the first two bits of the account's ID.
- We can use the `caller` instruction to get the hash of the caller and then check if the caller is in the Merkle tree committed to by the account's `code_root`.

We should consider refactoring the implementation of the block header. This was initially proposed by @bobbinth here.
> A couple of thoughts for the future:
>
> - Should we dedicate one element to bookkeeping info? For example: version, timestamp?
> - Should we split `state_root` into `account_root` and `nullifier_root`?
>
> One potential argument for splitting `state_root` into `account_root` and `nullifier_root` is that we could add a couple more procedures to the kernel - e.g., `miden::sat::tx::get_account_root` and `miden::sat::tx::get_nullifier_root`.
I agree and think it makes sense to implement these changes.
This issue describes the basic wallet interface which we should implement for the testnet. It could evolve into a reference wallet implementation, though, at this stage it is rather simplistic.
The interface defines 4 methods:

- `receive_asset`
- `send_asset`
- `auth_tx`
- `set_pub_key`

The first two of the above methods should probably be an interface on their own, and we should recommend that most accounts implement these methods.
The goal is to provide a wallet with the following capabilities:
Not supported in this implementation:
The implementation could probably go into Miden lib, maybe under the `miden::wallets::simple` namespace - but there could be other options as well.
Below, we provide high-level details about each of the interface methods.
`receive_asset` method

The purpose of this method is to add a single asset to an account's vault. Pseudo-code for this method could look like so:
```
receive_asset(asset)
    self.add_asset(asset)
end
```
In the above, `add_asset` is the kernel procedure `miden::sat::account::add_asset` described in #3 (comment).

Note: this method does not increment the account nonce. The nonce will be incremented in the `auth_tx` method described below. Thus, receiving assets requires authentication.
`send_asset` method

The purpose of this method is to create a note which sends a single asset to the specified recipient. Pseudo-code for this method could look like so:
```
send_asset(asset, recipient)
    self.remove_asset(asset)
    tx.create_note(recipient, asset)
end
```
In the above, `remove_asset` is the kernel procedure `miden::sat::account::remove_asset` and `create_note` is the kernel procedure `miden::sat::tx::create_note`, both described in #3 (comment).

`recipient` is a partial hash of the created note computed outside the VM as `hash(hash(hash(serial_num), script_hash), input_hash)`. This allows computing the note hash as `hash(recipient, vault_hash)`, where `vault_hash` can be computed inside the VM based on the specified asset.

Note: this method also does not increment the account nonce. The nonce will be incremented in the `auth_tx` method described below. Thus, sending assets requires authentication.
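The two-stage note hash computation described above can be sketched as follows, with a stand-in 64-bit hash in place of the VM's native hash function:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in 2-to-1 hash (the VM uses its native hash function instead).
fn hash2(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

// Stand-in 1-input hash.
fn hash1(a: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    h.finish()
}

// recipient = hash(hash(hash(serial_num), script_hash), input_hash),
// computed outside the VM.
fn recipient(serial_num: u64, script_hash: u64, input_hash: u64) -> u64 {
    hash2(hash2(hash1(serial_num), script_hash), input_hash)
}

// note_hash = hash(recipient, vault_hash), where vault_hash can be
// computed inside the VM from the specified asset.
fn note_hash(recipient: u64, vault_hash: u64) -> u64 {
    hash2(recipient, vault_hash)
}

fn main() {
    let r = recipient(1, 2, 3);
    // Only `r` and the vault hash are needed to derive the full note hash.
    let h = note_hash(r, 4);
    assert_eq!(h, hash2(hash2(hash2(hash1(1), 2), 3), 4));
}
```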
`auth_tx` method

The purpose of this method is to authenticate a transaction. For the purposes of this method we make the following assumptions:

- The transaction is authenticated by signing `hash(account_id || account_nonce || input_note_hash || output_note_hash)` using the Falcon signature scheme.

Pseudo-code for this method could look like so:
```
auth_tx()
    # compute the message to sign
    let account_id = self.get_id()
    let account_nonce = self.get_nonce()
    let input_notes_hash = tx.get_input_notes_hash()
    let output_notes_hash = tx.get_output_notes_hash()
    let m = hash(account_id, account_nonce, input_notes_hash, output_notes_hash)

    # get public key from account storage and verify signature
    let pub_key = self.get_item(0)
    falcon::verify_sig(pub_key, m)

    # increment account nonce
    self.increment_nonce()
end
```
It is assumed that the signature for the `falcon::verify_sig` procedure will be provided non-deterministically via the advice provider. Thus, the above procedure can succeed only if the prover has a valid Falcon signature over `hash(account_id || account_nonce || input_note_hash || output_note_hash)` for the public key stored in the account.

All procedures invoked as a part of this method, except for `falcon::verify_sig`, have equivalent kernel procedures described in #3 (comment). We assume that `falcon::verify_sig` is a part of the Miden standard library.
Open question: should the signed message be different? For example, maybe we should include the hash of the entire account state (initial and final) in the message hash as well?
`set_pub_key` method

The purpose of this method is to rotate an account's public key (i.e., replace the current key with a new value). For the purposes of this method we make the following assumptions:

- The key rotation is authenticated by signing `hash(account_id || account_nonce || old_key || new_key)` using the Falcon signature scheme.

Pseudo-code for this method could look like so:
```
set_pub_key(new_key)
    # compute message to sign
    let account_id = self.get_id()
    let account_nonce = self.get_nonce()
    let old_key = self.get_item(0)
    let m = hash(account_id, account_nonce, old_key, new_key)

    # verify signature
    falcon::verify_sig(old_key, m)

    # update the key at storage location 0 to a new value
    self.set_item(0, new_key)

    # increment account nonce
    self.increment_nonce()
end
```
It is assumed that the signature for the `falcon::verify_sig` procedure will be provided non-deterministically via the advice provider. Thus, the above procedure can succeed only if the prover has a valid Falcon signature over `hash(account_id || account_nonce || old_key || new_key)` for the public key stored in the account.

All procedures invoked as a part of this method, except for `falcon::verify_sig`, have equivalent kernel procedures described in #3 (comment). We assume that `falcon::verify_sig` is a part of the Miden standard library.
Examples of using the above interface are described below.
To receive funds into an account we'd need a note which invokes the `receive_asset` method. The script for this note could look something like this (this is actually identical to the P2ID script):
```
note_script()
    let target_account_id = self.get_input(0)
    assert(account.get_id() == target_account_id)
    for asset in self.get_assets()
        account.receive_asset(asset)
    end
end
```
The above script assumes that the recipient account ID is specified via note inputs.
In addition to the note, the transaction consuming it would need to have a tx script which invokes the `auth_tx` method like so:
```
tx_script()
    account.auth_tx()
end
```
To execute this transaction, the user will need to provide a signature over `hash(account_id || account_nonce || input_note_hash || output_note_hash)` against the public key stored in the account.
To send funds from an account we'd need to create a transaction which invokes the `send_asset` method as a part of its tx script. The tx script for such a transaction could look like so:
```
tx_script()
    account.send_asset(<asset1>, <recipient1>)
    account.send_asset(<asset2>, <recipient2>)
    account.auth_tx()
end
```
To execute this transaction, the user will need to provide a signature over `hash(account_id || account_nonce || input_note_hash || output_note_hash)` against the public key stored in the account.
We can also combine the `receive_asset` and `send_asset` methods to execute an atomic swap. A script for a note involved in the swap could look like so:
```
note_script()
    let target_account_id = self.get_input(0)
    assert(account.get_id() == target_account_id)
    account.receive_asset(<asset1>)
    account.send_asset(<asset2>, <recipient>)
end
```
In the above case, `asset1`, `asset2`, and `recipient` are hardcoded into the note script. Anyone consuming this note will add `asset1` to their account and will have to create a note carrying `asset2` addressed to the specified `recipient`.
To consume this note in a transaction, the transaction will also need to include a tx script which looks something like this:
```
tx_script()
    account.auth_tx()
end
```
To execute this transaction, the user will need to provide a signature over `hash(account_id || account_nonce || input_note_hash || output_note_hash)` against the public key stored in the account.
To update the public key of an account, we'd need to create a transaction which invokes the `set_pub_key` method as a part of its tx script. The tx script for such a transaction could look like so:
```
tx_script()
    account.set_pub_key(<new_key>)
end
```
To execute this transaction, the user will need to provide a signature over `hash(account_id || account_nonce || old_key || new_key)` against the public key stored in the account prior to the update.
As part of the transaction prologue we need to authenticate that the notes being consumed in the transaction exist in the note db. This involves:
We currently have a `TODO` placeholder in the prologue here.
Below we see a diagram for how this data is structured:
This is dependent on having:
Transaction epilogue is a program which is executed at the end of a transaction (after note scripts and tx script are executed). The epilogue needs to accomplish the following tasks:
Thus, by the end of the epilogue, stack state of the VM would look like this:
Computing the account hash should be fairly straightforward. The layout of account data in the root context's memory is described in #14 - so, I won't go into much detail here.
Computing a sequential hash of script roots of all consumed notes should also be pretty straightforward. Consumed notes are laid out as described in #14, and we just need to read the MAST root of each consumed note (located at memory offset …).
The reason why we need to do this is to bind the sequence of executed note scripts with the appropriate note inputs/assets. Basically, to prevent the prover from executing a note script of one note against inputs/assets of another note.
To compute hash of all created notes, we first need to compute hashes of all individual notes, and then sequentially hash all the individual note hashes together.
As described in #14, the overall layout of root context's memory would look something like this:
Thus, the created note data section would start at memory offset …; data for `note0` would start at …, data for `note1` would start at …, and so on.
Assuming …:

Memory address | Variable | Description |
---|---|---|
… | `recipient` | Note's recipient computed by hashing the note's serial number, script hash, and input hash. |
… | `num_assets` | Number of assets contained in this note's vault. |
… | `assets` | A list of all note assets laid out one after another. |
Here, each asset occupies 1 word, and thus the first asset in the note will be at address …
To compute a note's hash, we first need to compute the hash of the note's vault, and then compute `hash(recipient, vault_hash)`. Then, once we have all individual note hashes, we can hash them sequentially to get a single commitment to all created notes. It may also be possible to interleave computation of note hashes and the overall commitment - but I'm not sure if this is going to be more efficient than doing it in stages.
This should be pretty simple: we can take the vault of the final account state and add all note assets to it one-by-one. This can be interleaved with computing vault hashes for the created notes as when we compute these hashes we need to place assets onto the stack anyway.
It may be convenient to add a `sender` field to notes. This field will contain the account ID of the account which created the note. This field could then be used by note consumers as a reliable way of determining note origin.

The `sender` field would need to be set in the transaction kernel at the time a note is created (i.e., via the `create_note` procedure).
To add this field we'll need to re-define note hash and nullifier computations as follows:

- note hash: `hash(hash(hash(hash(serial_num, [0; 4]), script_hash), sender), vault_hash)`
- nullifier: `hash(serial_num, script_hash, vault_hash, sender)`
The main drawback of this change is that it requires one extra hash when computing a note hash (however, nullifier computation complexity remains the same).
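These computations can be sketched with a stand-in hash, where `hash2` is a 2-to-1 hash (one invocation) and `hash4` absorbs four inputs in a single invocation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in 2-to-1 hash (the kernel uses its native permutation instead).
fn hash2(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

// Stand-in hash absorbing four inputs in one invocation.
fn hash4(a: u64, b: u64, c: u64, d: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    c.hash(&mut h);
    d.hash(&mut h);
    h.finish()
}

// Note hash: four chained 2-to-1 hashes - one more than without `sender`.
fn note_hash(serial_num: u64, script_hash: u64, sender: u64, vault_hash: u64) -> u64 {
    hash2(hash2(hash2(hash2(serial_num, 0), script_hash), sender), vault_hash)
}

// Nullifier: a single hash over four inputs - same cost as before.
fn nullifier(serial_num: u64, script_hash: u64, vault_hash: u64, sender: u64) -> u64 {
    hash4(serial_num, script_hash, vault_hash, sender)
}

fn main() {
    // Changing the sender changes both the note hash and the nullifier.
    assert_ne!(note_hash(1, 2, 3, 4), note_hash(1, 2, 9, 4));
    assert_ne!(nullifier(1, 2, 3, 4), nullifier(1, 2, 3, 9));
}
```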
We should consider refactoring both the consumed and created note data layouts so that the data is structured in a way that aligns with kernel access/usage patterns. See the comments below:
I wonder if we should arrange this a bit differently so that metadata and hash are next to each other. Specifically, it could go like this:
```
const.CONSUMED_NOTE_METADATA_OFFSET=0
const.CONSUMED_NOTE_HASH_OFFSET=1
const.CONSUMED_NOTE_CORE_DATA_OFFSET=2
const.CONSUMED_NOTE_SERIAL_NUM_OFFSET=2
const.CONSUMED_NOTE_SCRIPT_ROOT_OFFSET=3
const.CONSUMED_NOTE_INPUTS_HASH_OFFSET=4
const.CONSUMED_NOTE_VAULT_ROOT_OFFSET=5
const.CONSUMED_NOTE_ASSETS_OFFSET=6
```
It seems a bit cleaner and may be useful in the future for computing parent node of note metadata and hash.
To keep consistency with the previous comment, maybe the order should be:
```
const.CREATED_NOTE_METADATA_OFFSET=0
const.CREATED_NOTE_HASH_OFFSET=1
const.CREATED_NOTE_RECIPIENT_OFFSET=2
const.CREATED_NOTE_VAULT_HASH_OFFSET=3
const.CREATED_NOTE_ASSETS_OFFSET=4
```
Something else to consider is the way in which we compute the commitment for consumed notes. This is computed as a sequential hash over all (nullifier, script_root) tuples. I wonder if we could modify the layout/hashing patterns such that the nullifier and script root are stored next to each other. This may allow us to use the `mem_stream` operation when computing the consumed notes commitment.
```
const.CONSUMED_NOTE_METADATA_OFFSET=0
const.CONSUMED_NOTE_HASH_OFFSET=1
const.CONSUMED_NOTE_NULLIFIER_OFFSET=2
const.CONSUMED_NOTE_CORE_DATA_OFFSET=2
const.CONSUMED_NOTE_SCRIPT_ROOT_OFFSET=3
const.CONSUMED_NOTE_SERIAL_NUM_OFFSET=4
const.CONSUMED_NOTE_INPUTS_HASH_OFFSET=5
const.CONSUMED_NOTE_VAULT_ROOT_OFFSET=6
const.CONSUMED_NOTE_ASSETS_OFFSET=7
```
This is a placeholder issue for the Miden Transaction Pool of the Block Producer.
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.
The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer has an endpoint (RPC) for Miden Clients and others using RPC requests. Clients will send transactions and proofs thereof to the Block Producer. The Transactions need to be collected and queued in the Transaction Pool before being processed.
[WIP]
We should implement the block format in the rust codebase and also explore where we should unhash the block data in the kernel.
See the table below and the following comment for more insight on the format.
Field | Description |
---|---|
`prev_hash` | Hash of the previous block's header (32 bytes). |
`block_num` | Unique sequential number of the current block (4 bytes should be enough). |
`chain_root` | A commitment to an MMR of the entire chain where each block is a leaf (32 bytes). |
`state_root` | A combined commitment to the Account and Nullifier databases (32 bytes). |
`note_root` | A commitment to all notes created in the current block (32 bytes). |
`batch_root` | A commitment to a set of transaction batches executed as a part of this block (32 bytes). |
`proof_hash` | Hash of a STARK proof attesting to the correct state transition (32 bytes). |
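The header fields above could be mirrored in Rust roughly as follows. This is a hedged sketch: the field names, the use of raw `[u8; 32]` arrays instead of a `Digest` type, and the serialized-size constant are assumptions for illustration, not the actual miden-base types.

```rust
// Hypothetical sketch of the block header layout described in the table
// above. Field names and types are assumptions; the real implementation
// would likely use a Digest type rather than raw byte arrays.
#[derive(Debug, Clone, PartialEq)]
pub struct BlockHeader {
    pub prev_hash: [u8; 32],  // hash of the previous block's header
    pub block_num: u32,       // unique sequential block number (4 bytes)
    pub chain_root: [u8; 32], // MMR commitment to the entire chain
    pub state_root: [u8; 32], // combined commitment to Account & Nullifier DBs
    pub note_root: [u8; 32],  // commitment to notes created in this block
    pub batch_root: [u8; 32], // commitment to this block's transaction batches
    pub proof_hash: [u8; 32], // hash of the STARK proof of the state transition
}

impl BlockHeader {
    /// Total serialized size: six 32-byte digests plus a 4-byte block number.
    pub const SERIALIZED_SIZE: usize = 6 * 32 + 4;
}

fn main() {
    let genesis = BlockHeader {
        prev_hash: [0; 32],
        block_num: 0,
        chain_root: [0; 32],
        state_root: [0; 32],
        note_root: [0; 32],
        batch_root: [0; 32],
        proof_hash: [0; 32],
    };
    assert_eq!(genesis.block_num, 0);
    println!("header size = {} bytes", BlockHeader::SERIALIZED_SIZE);
}
```

One takeaway from writing the fields out: everything except `block_num` is a 32-byte commitment, so the header stays small regardless of how much data each commitment covers.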
A `ProvenTransaction` object is the result of executing and proving a transaction. It should contain the minimal amount of data needed to verify that a transaction was executed correctly. The object should consist of the following:
Component | Size | Description |
---|---|---|
Account ID | | The identifier of the account involved in the transaction. |
Initial account hash | | Hash of the account state at the beginning of the transaction. |
Final account hash | | Hash of the account state at the end of the transaction. |
Consumed note info | | A list of tuples (nullifier, script_root) for all notes consumed by the transaction. |
Created note info | | A list of tuples (note_hash, note_meta) for all notes created during the transaction. |
tx script root | | MAST root of the transaction script for the transaction (if any). |
Block reference | | Hash of the last known block at the time the transaction was created. |
Proof | variable | STARK proof attesting to the correct execution of the transaction program. |
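The components in the table can be mirrored as a Rust struct. This is a hedged sketch only: the `Digest` alias, the `u64` account ID, and all field names are assumptions for illustration, not the actual miden-base API.

```rust
// Hypothetical Rust mirror of the ProvenTransaction layout in the table
// above. Digest is a placeholder type; field names are assumptions.
type Digest = [u8; 32];

#[derive(Debug, Clone)]
pub struct ProvenTransaction {
    pub account_id: u64,                       // identifier of the account
    pub initial_account_hash: Digest,          // account state before the tx
    pub final_account_hash: Digest,            // account state after the tx
    pub consumed_notes: Vec<(Digest, Digest)>, // (nullifier, script_root) tuples
    pub created_notes: Vec<(Digest, Digest)>,  // (note_hash, note_meta) tuples
    pub tx_script_root: Option<Digest>,        // MAST root of the tx script, if any
    pub block_ref: Digest,                     // hash of the last known block
    pub proof: Vec<u8>,                        // serialized STARK proof (variable size)
}

fn main() {
    let tx = ProvenTransaction {
        account_id: 42,
        initial_account_hash: [0; 32],
        final_account_hash: [1; 32],
        consumed_notes: vec![([2; 32], [3; 32])],
        created_notes: vec![],
        tx_script_root: None,
        block_ref: [4; 32],
        proof: vec![],
    };
    assert_eq!(tx.consumed_notes.len(), 1);
    println!("tx touches account {}", tx.account_id);
}
```

Note that the struct carries no account state beyond the two hashes, which is what keeps the object minimal: a verifier only needs commitments, not the data behind them.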
A verifier would use the above information as follows:
Assemble the transaction program from `script_roots`, `tx_script_root`, and components of the transaction kernel (i.e., prologue, epilogue, note setup script, etc.). The stack inputs and outputs of the program would be:

Inputs: `[block_ref, acct_id, init_acct_hash, input_notes_hash, ...]`
Outputs: `[final_acct_hash, created_notes_hash, ...]`
In the above, `input_notes_hash` is a sequential hash of all `(nullifier, script_root)` tuples of consumed notes, and `created_notes_hash` is a sequential hash of all `(note_hash, note_meta)` tuples of created notes.
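The chaining structure of these commitments can be sketched as follows. This uses the standard library's `DefaultHasher` purely as a stand-in for the VM's native hash function (it is not cryptographic); the function name and tuple encoding are assumptions, and the point is only that a sequential hash binds both the tuples and their order.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative sketch of a sequential hash over (nullifier, script_root)
// or (note_hash, note_meta) tuples. DefaultHasher stands in for the VM's
// native hash and is NOT cryptographic; only the chaining shape matters.
fn sequential_hash(tuples: &[([u8; 32], [u8; 32])]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for (first, second) in tuples {
        // absorbing every tuple in order makes the commitment depend
        // on all values and on their position in the list
        first.hash(&mut hasher);
        second.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let a = ([1u8; 32], [2u8; 32]);
    let b = ([3u8; 32], [4u8; 32]);
    // same tuples in the same order -> same commitment
    assert_eq!(sequential_hash(&[a, b]), sequential_hash(&[a, b]));
    // reordering the tuples changes the commitment
    assert_ne!(sequential_hash(&[a, b]), sequential_hash(&[b, a]));
}
```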
An asset vault is an object used to manage assets. It is backed by a sparse Merkle tree, which allows for authentication of assets via inclusion proofs. We need to implement the asset vault in Miden assembly. It should provide the following standard functionality:
Procedure |
---|
add_asset |
remove_asset |
get_balance |
has_nfasset |
get_commitment |
This will be used by an account to manage its assets. Furthermore, it will be used by the transaction prologue and epilogue to ensure that there is no net change in asset balances across a transaction.
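To make the interface concrete, here is a hedged in-memory sketch of the procedures listed above, written in Rust. The real vault is implemented in Miden assembly over a sparse Merkle tree; the names follow the table, but the signatures, the `HashMap` backing, and the omission of `get_commitment` (which would require a toy Merkle tree) are all assumptions for illustration.

```rust
use std::collections::{HashMap, HashSet};

// Toy in-memory model of the asset vault interface. The real vault is
// backed by a sparse Merkle tree and implemented in Miden assembly;
// this sketch only illustrates the expected semantics of each procedure.
#[derive(Default)]
struct AssetVault {
    fungible: HashMap<u64, u64>, // fungible asset id -> balance
    non_fungible: HashSet<u64>,  // set of non-fungible asset ids
}

impl AssetVault {
    /// Add a fungible asset amount to the vault (add_asset).
    fn add_asset(&mut self, asset: u64, amount: u64) {
        *self.fungible.entry(asset).or_insert(0) += amount;
    }

    /// Remove part of a fungible balance from the vault (remove_asset).
    fn remove_asset(&mut self, asset: u64, amount: u64) -> Result<(), String> {
        let balance = self.fungible.entry(asset).or_insert(0);
        if *balance < amount {
            return Err("insufficient balance".into());
        }
        *balance -= amount;
        Ok(())
    }

    /// Return the balance of a fungible asset (get_balance).
    fn get_balance(&self, asset: u64) -> u64 {
        *self.fungible.get(&asset).unwrap_or(&0)
    }

    /// Return true if the vault holds the non-fungible asset (has_nfasset).
    fn has_nfasset(&self, asset: u64) -> bool {
        self.non_fungible.contains(&asset)
    }
}

fn main() {
    let mut vault = AssetVault::default();
    vault.add_asset(1, 100);
    vault.remove_asset(1, 30).unwrap();
    assert_eq!(vault.get_balance(1), 70);
    assert!(!vault.has_nfasset(2));
}
```

The prologue/epilogue balance check mentioned above would amount to asserting that, per asset, the sum of input-side balances equals the sum of output-side balances.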
When ingesting asset data associated with a consumed note from the advice provider as part of the prologue (#14), it will be helpful if the asset data is padded to a multiple of the rate width. This would involve padding with an additional word if the number of assets is odd.
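The padding rule is tiny but worth pinning down. In this sketch a word is modeled as four `u64` values and the pad value is a zero word; both are assumptions (the real elements are field elements, and the actual pad value would need to match the hasher's padding convention).

```rust
// Sketch of rate-width padding: the hasher absorbs two words (8 field
// elements) per permutation, so an odd number of asset words gets one
// extra word appended. The zero-word pad value is an assumption.
type Word = [u64; 4]; // one word = 4 field elements (modeled as u64 here)

fn pad_assets(mut assets: Vec<Word>) -> Vec<Word> {
    // append a padding word when the asset count is odd so the total
    // is a multiple of the rate width (2 words)
    if assets.len() % 2 != 0 {
        assets.push([0; 4]);
    }
    assets
}

fn main() {
    assert_eq!(pad_assets(vec![[1, 0, 0, 0]]).len(), 2); // odd -> padded
    assert_eq!(pad_assets(vec![[1, 0, 0, 0], [2, 0, 0, 0]]).len(), 2); // even -> unchanged
    assert_eq!(pad_assets(vec![]).len(), 0); // empty -> unchanged
}
```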
This is a placeholder issue for the Miden Node
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer. The centralized operator of the Miden Rollup will use the Miden Node.
In the future, the Miden Node must also be able to post epoch proofs to Ethereum, but this is not needed for the first testnet.
This is a placeholder issue for the Miden Block Producer
The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.
The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. In Miden, there is a proof for every transaction. Transaction proofs will be aggregated into batches by the Transaction Aggregator. Batches will be aggregated into blocks by the Block Producer module.
[WIP]
Currently, account IDs encode info which helps us identify whether an account is a regular account, a fungible asset faucet, or a non-fungible asset faucet. But there is additional info about an account which might be useful to encode into it as well. Specifically, I'm thinking about the following properties:

- Whether the data is stored on-chain or off-chain (e.g., if this flag is set to `0`, then the data is assumed to be stored off-chain, and if it is `1`, then it is assumed to be stored on-chain).

On Miden we need to provide stablecoins. We cannot deploy ERC20 contracts on Miden, but we can mimic most of their features in Miden token accounts. The main problem might be how to handle the case where an address is blacklisted on Ethereum but has already bridged stablecoins to Miden.
DeFi is one of the most widespread use-case categories on blockchains, especially on Ethereum, and stablecoins drive DeFi. If Miden wants to become a relevant smart contract blockchain, it should support stablecoins.
In theory, Miden enables all existing stablecoin projects to launch their stablecoins on the platform. These projects have already gained users' trust and have ways to deal with regulators. But it is also possible for Polygon to launch its own stablecoin on Miden.
The most used stablecoins are smart contracts following the ERC-20 token standard (or BEP-20 on Binance); see USDC, USDT, DAI, WETH, BUSD. Those token contracts can be ported to other EVM-like blockchains, but not as easily to systems like Miden.
Stablecoins differ in how they maintain a stable value. Fiat- or collateral-backed stablecoins like USDT or USDC might be easier to rebuild on Miden than stablecoins with more complex stability mechanisms, like DAI.
In Miden, there are Asset accounts that issue assets; see 0xPolygonMiden/miden-vm#339.
Let's collect some ideas on how to provide USDC on Miden to Miden users.
There are other stablecoins and projects. I picked USDC as an example because it is the most used stablecoin by far - https://dune.com/hagaetc/stablecoins.
There are two ways for USDC to be deployed on Miden:
Other rollups, optimistic or zk, mint tokens on the rollup when those tokens are deposited on the Ethereum side of the token bridge. Those tokens are burnt whenever a user wants to bridge them back.
In theory, there can be a USDC Asset account issuing as many USDC as are held in the rollup token bridge on Ethereum. On Asset accounts, one can set and change the admin, who can have the ability to mint more. Querying the total supply should also be easy.
Circle wants to release native implementations of its tokens on rollups as well. The difference might be (it is not clear yet) that Circle retains control of the token contract and can blacklist users and issue tokens.
Let's try to continue with the easier mint-and-release approach. It is easier because we don't need to align with Circle on how to implement the token.
For reference, here is the USDC smart contract: https://etherscan.io/token/0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48#code
There might be more necessary features. Of the listed features, the most problematic seem to be getting the balance of another user and blacklisting someone. Assets are stored in accounts directly - there is no global list. And in Miden, it is possible to hold account data off-chain.
Since USDC is a centralized stablecoin - partly regulated by the SEC - there must be a way to prevent users from using USDC (e.g., for money laundering).
This is a placeholder issue for the Miden Block Kernel.
WIP
This is a placeholder issue for the Miden Batch Kernel.
The Batch Kernel is responsible for batching transaction proofs into a single proof using recursive verification. Basically, it proves that it verified the transaction proofs correctly using the Miden VM.
... [WIP]
WIP
Currently, the account nonce is specified to be just a single field element (64 bits). However, as described in #22 (comment), it may be desirable to let users set account nonces to random values. At the same time, we want to minimize the chance of a user accidentally using the same nonce more than once, as it may lead to security problems. Thus, it is probably a good idea to increase the size of the nonce.
Assuming account ID is reduced to a single field element as suggested in #8, we have the account consisting of the following components:
Thus, if the size of the nonce is 3 or fewer elements, we can compute the hash of the account state in 2 permutations of the hash function.
So, to summarize, we can increase nonce size to 2 or 3 field elements without any impact on efficiency of computing account state commitment. I think 2 elements is probably enough, but open to other suggestions too.
Our current transaction kernel assumes that the account against which a transaction is being executed already exists. Originally, I was thinking that we'd need a separate kernel to handle account creation, but now I wonder if we could use the same kernel with a few minor modifications. These modifications would be:
Stack inputs for the transaction would be changed from:
[BH, acct_id, IAH, NC, ...]
To something like:
[BH, acct_id, is_new_acct, IAH, NC, ...]
This new `is_new_acct` input would be set to `0` if the account already exists and to `1` if it doesn't. Based on this input, the prologue would do the following:
- If `is_new_acct` is `0`: verify that the account already exists (by checking against `account_root`).
- If `is_new_acct` is `1`: verify that the account doesn't yet exist (again, by checking against `account_root`), and:
  - use the `seed` to create an account with this ID (the seed would be provided via the advice provider);
  - make sure the nonce is `0` and the vault is empty (the code and the storage could be initialized to arbitrary values by the user).

Another thing we'd need to change is how we derive the account ID from a seed. The reason for this is that if an account is created via a public transaction (i.e., not proven locally), a malicious operator could steal the seed and create their own account with the same ID. So, what we want to do is bind the account ID to the code and storage with which the account is initialized. The procedure could look like this:
1. Compute `digest = hash(code_root, storage_root, nonce)`.
2. Set `account_id = digest[0]` and make sure it complies with various account ID rules (same as now).
3. Check that `digest[3]` has the required number of zero bits based on the account type: 24 for regular accounts and 32 for faucet accounts (same as now).
based on the account type: 24 for regular accounts and 32 for faucet accounts (same as now).Basically, the user would try different nonce values (nonce could be 1 field element) until all the rules are satisfied. If someone else wanted to get the same account ID for different code_root
and storage_root
combination, they'd need find a partial pre-image for the digest
. I believe the work required would be:
I think the above is probably more than sufficient because the attack window is very specific and short (i.e., once an account has been created, finding a different seed which results in the same account ID is meaningless).
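The grinding procedure described above can be sketched as a loop. In this sketch, `DefaultHasher` stands in for the real hash function, the `digest` derivation is a made-up stand-in, and the 8-bit difficulty is chosen only so the example terminates quickly (the issue proposes 24 or 32 bits); none of this is the actual miden-base implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative difficulty only; the issue proposes 24 (regular) or 32
// (faucet) zero bits.
const POW_BITS: u32 = 8;

// Stand-in for digest = hash(code_root, storage_root, nonce): derives
// four pseudo "field elements" from the three inputs. NOT the real hash.
fn digest(code_root: u64, storage_root: u64, nonce: u64) -> [u64; 4] {
    let mut out = [0u64; 4];
    for (i, slot) in out.iter_mut().enumerate() {
        let mut h = DefaultHasher::new();
        (code_root, storage_root, nonce, i as u64).hash(&mut h);
        *slot = h.finish();
    }
    out
}

/// Try successive nonces until digest[3] has the required number of
/// leading zero bits, then take digest[0] as the account ID.
fn grind_account_id(code_root: u64, storage_root: u64) -> (u64, u64) {
    let mut nonce = 0u64;
    loop {
        let d = digest(code_root, storage_root, nonce);
        if d[3].leading_zeros() >= POW_BITS {
            return (d[0], nonce); // (account_id, winning nonce)
        }
        nonce += 1;
    }
}

fn main() {
    let (account_id, nonce) = grind_account_id(111, 222);
    let d = digest(111, 222, nonce);
    assert_eq!(account_id, d[0]);
    assert!(d[3].leading_zeros() >= POW_BITS);
    println!("found account id after {} tries", nonce + 1);
}
```

Because the ID is a function of `code_root` and `storage_root`, an attacker who wants the same ID for different code/storage must find a partial pre-image rather than simply replaying a stolen seed.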
Implement the transaction kernel we'll need for the Miden rollup.
The transaction kernel is the foundational component for executing and proving transactions. This issue is specifically for the single-account transaction kernel - i.e., where each transaction touches only one account.
The tasks below involve finishing the kernel prologue, epilogue, and note setup script and implementing kernel user procedures.
@frisitano, @hackaugusto, @bobbinth, @grjte,
Coordinator: @frisitano
The note setup script is a program which runs right before a note's script is executed. For example, for a transaction which consumes a single note (`note0`), the complete transaction MAST could look as follows:
And for a transaction consuming two notes (`note0` and `note1`), the complete transaction MAST could look as follows:
The note setup script is exactly the same for all notes, and it needs to accomplish the following tasks:
A `CompiledTransaction` object is the result of compiling a set of note scripts in the context of an account. This process would be performed by a `TransactionCompiler` and would look as follows:
Compiled transaction consists of the following components:
Component | Type | Description |
---|---|---|
Account ID | 1 element | Identifier of the account involved in the transaction. |
Consumed notes | Vec<Note> | A list of objects for all notes consumed in the transaction (if any). |
Tx script root | Digest | MAST root of the tx script for the transaction (if any). |
Tx program | Program | An executable program describing the transaction. |
Notice that the `CompiledTransaction` object does not actually contain any of the account data (except for the account ID). Thus, we assume that it will be passed to a component which can look up all required account data before processing the transaction further.
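The component table above can be mirrored in Rust as follows. This is a hedged sketch: `Note`, `Digest`, and `Program` are opaque placeholders here, and the field names are assumptions rather than the actual miden-base types.

```rust
// Hypothetical Rust mirror of the CompiledTransaction components listed
// above. Note, Digest, and Program are placeholders for illustration.
type Digest = [u8; 32];

#[derive(Debug, Clone)]
struct Note(Digest); // placeholder: real notes carry scripts, assets, inputs, etc.

#[derive(Debug, Clone)]
struct Program(Digest); // placeholder for an executable MAST program

#[derive(Debug, Clone)]
struct CompiledTransaction {
    account_id: u64,                // 1 field element in the real layout
    consumed_notes: Vec<Note>,      // notes consumed by the transaction (if any)
    tx_script_root: Option<Digest>, // MAST root of the tx script (if any)
    tx_program: Program,            // executable program describing the tx
}

fn main() {
    // note the absence of account data beyond the ID: the consumer of
    // this object is expected to look up the account state itself
    let tx = CompiledTransaction {
        account_id: 7,
        consumed_notes: vec![Note([0; 32])],
        tx_script_root: None,
        tx_program: Program([1; 32]),
    };
    assert_eq!(tx.consumed_notes.len(), 1);
}
```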