meroscrypto / meros

An instant and feeless cryptocurrency for the future, secured by the Merit Caching Consensus Mechanism.

Home Page: https://meroscrypto.io

License: Other

Nim 56.26% HTML 0.05% Python 42.94% C 0.75%

meros's People

Contributors

dependabot[bot], jromero, kayabanerve, kolbyml, quelklef, rikublock, vyryn


meros's Issues

Potential Pending Debt is bidirectional.

Setup:

  • Account[X] is an unverified Data.
  • Account[X + 1] is an unverified Send spending everything on the Account.
  • Account[X] then gets a Send spending everything.

On nodes that added Account[X + 1] before Account[X]'s Send, Account[X]'s Send will be rejected for violating the potential pending debt. On nodes that added Account[X]'s Send before Account[X + 1], Account[X + 1] will be rejected for the same reason. Nodes that see Account[X]'s Send first will always follow the correct chain; nodes that see Account[X + 1] first will never accept a Send at Account[X].

Mints and Sends can be claimed multiple times.

When adding a Claim, we check every Entry in the Account's seq to make sure it isn't a double claim. Since the DB addition, this seq only holds the last 6 blocks of Entries, meaning the same Mint can be claimed again every 6 blocks.

The solution is a table of [int, bool], sketched below.
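
A minimal sketch of that fix in Nim, assuming the table is keyed by Mint nonce (names hypothetical):

import tables

# Persistent record of claimed Mints, keyed by Mint nonce. Unlike the
# last-6-blocks cache, entries are never evicted.
var claimed = initTable[int, bool]()

proc checkAndMarkClaimed(claimed: var Table[int, bool], mintNonce: int) =
  ## Reject a Claim whose Mint was already claimed, however long ago;
  ## otherwise record it so a future double claim fails.
  if claimed.getOrDefault(mintNonce, false):
    raise newException(ValueError, "Mint was already claimed.")
  claimed[mintNonce] = true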

Review of PrivateKey.nim

Similar to #1, here is my review:

After the "newPrivateKey" procedure you should use secp256k1_ec_seckey_verify to ensure that it's a valid key.

C definition - https://github.com/status-im/secp256k1/blob/be6f5385330905bf1d7cc441be6703cfa7aef847/include/secp256k1.h#L506-L516

Nim wrapper:

proc secp256k1_ec_seckey_verify*(
  ctx: ptr secp256k1_context;
  seckey: ptr cuchar): cint {.secp.}

Usage in nim-eth-keys new API (MIT/Apache license):

proc newPrivateKey*(): PrivateKey =
  ## Generates new private key.
  let ctx = getSecpContext()
  while true:
    if randomBytes(result.data) == KeyLength:
      if secp256k1_ec_seckey_verify(ctx, cast[ptr cuchar](addr result)) == 1:
        break

proc initPrivateKey*(hexstr: string): PrivateKey =
  ## Create new private key from hexadecimal string representation.
  let ctx = getSecpContext()
  var o = fromHex(stripSpaces(hexstr))
  if len(o) < KeyLength:
    raise newException(EthKeysException, InvalidPrivateKey)
  copyMem(addr result, addr o[0], KeyLength)
  if secp256k1_ec_seckey_verify(ctx, cast[ptr cuchar](addr result)) != 1:
    raise newException(EthKeysException, InvalidPrivateKey)

proc initPrivateKey*(data: openarray[byte]): PrivateKey =
  ## Create new private key from binary data blob.
  let ctx = getSecpContext()
  if len(data) < KeyLength:
    raise newException(EthKeysException, InvalidPrivateKey)
  copyMem(addr result, unsafeAddr data[0], KeyLength)
  if secp256k1_ec_seckey_verify(ctx, cast[ptr cuchar](addr result)) != 1:
    raise newException(EthKeysException, InvalidPrivateKey)

Efficiency

result[(int) i / 2] = (uint8) parseHexInt($hex[i .. i + 1])

$ isn't needed; it's already a string. Slicing allocates a new string, triggering the GC; hex.toOpenArray(i, i + 1) would avoid the copy/allocation. i / 2 should be i div 2, which uses truncated integer division.
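
A minimal sketch of the suggested fix, parsing the two nibbles directly (the helper is hypothetical):

proc parseHexPair(hex: string, i: int): uint8 =
  ## Parse hex[i .. i + 1] without allocating a slice; `div` gives the
  ## truncated integer division the index needs.
  proc nibble(c: char): uint8 =
    case c
    of '0'..'9': uint8(ord(c) - ord('0'))
    of 'a'..'f': uint8(ord(c) - ord('a') + 10)
    of 'A'..'F': uint8(ord(c) - ord('A') + 10)
    else: raise newException(ValueError, "Invalid hex char: " & c)
  (nibble(hex[i]) shl 4) or nibble(hex[i + 1])

# Usage in the loop under review:
# result[i div 2] = parseHexPair(hex, i)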

Serialization sometimes causes unneeded sequences of 0s to appear.

An example is here:

207028D62D9FE120A5AC53B3D4D58F14BF11ABCC15E7A6B309873C5AD4F75E45B708000000000000000020BBC9D27CEA0D954550008775DAA3C750EA8E9CA8B580D4B50B0E53CCF17942AC0502540BE400080000000000000001409E4CA5E75B48AA0E6AE9F2280A10A2A03E4BCA311CE392736D92AE38CEAB3E17ADF5A7B822FB9FC2C27EDF0963C490CE7775F0D6FE04AFCF15E924E052B3DA05

The 0s are at the start of the field, and therefore mean nothing. Their inclusion also breaks hashing, as Nodes without this bug will serialize the data without the 0s and generate a different hash.

Anyone can trigger a MeritRemoval for anyone.

A MemoryVerification has a signature of just the hash. This means any MemoryVerification will match any nonce for the same Verifier. If a Verifier creates a signature for X at nonce 0, and Y at nonce 1, a third party can claim X and Y were both at nonce 0 and trigger a removal.
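
A minimal sketch of one possible fix, assuming we make the signature cover the nonce as well as the hash (the serialization helper is hypothetical):

proc signatureData(nonce: uint32, hash: string): string =
  ## Prefix the signed message with the nonce, so a signature for
  ## (X, nonce 0) can never be replayed as a claim about another nonce.
  result = ""
  for i in countdown(3, 0):
    result.add(char((nonce shr (8 * i)) and 0xFF'u32))
  result.add(hash)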

Meros doesn't handle double verifies for competing Entries correctly.

As soon as Meros sees a verified Entry, it deletes all competing Entries.

If Meros has 100k Live Merit, the confirmation threshold is 50,601. If one Merit Holder has 10k Merit, there's 90k Merit held by other parties. Assuming one Entry gets 46k Merit and the other gets 44k, the single Merit Holder can push both past the threshold.

The following MeritRemoval will cause the live Merit to decrease to 90k. Then, the confirmation threshold is 45,601, and one Entry will be verified.
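
A quick check of the arithmetic, assuming the confirmation threshold implied by the figures above (liveMerit div 2 + 601):

proc confirmationThreshold(liveMerit: int): int =
  liveMerit div 2 + 601

doAssert confirmationThreshold(100_000) == 50_601
doAssert confirmationThreshold(90_000) == 45_601
# With 10k Merit of their own, the holder lifts 46k to 56k and 44k to
# 54k; both clear 50,601, so both Entries verify before the removal.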

Meros will see one Entry as verified currently and delete the others. It will then fail to add the Verification (due to a separate issue) and fail to trigger a MeritRemoval. Once that separate issue is fixed, Meros will still not trigger a MeritRemoval.

The most secure option is that if competing Entries are detected, we remove instant Verification and force defaulting (one hour). That's still only secure if we see the competing Entry before we see the competing Verification. We still need to allow adding competing Entries once one is verified and only delete competing Entries when an Entry leaves Epochs.

For just slightly less security, if we see competing Entries, we can increase the verification threshold to ~80%: enough that the top two to five Merit Holders would need to collude, and plan to lose all their Merit, in order to cause multiple Entries at the same index to be temporarily verified.

This harms people who create competing Entries by accident, but thankfully, their subsequent Entries can still be instant. To prevent services from losing funds, the RPC should report how long ago an Entry was added to the Database, so services can apply a threshold (2-5 seconds) of their choosing.

I hate the idea of a time delay, but a 5 second delay post-sighting is still only 5 seconds for a fully settled transaction. Not optimal, but perfectly feasible, even in real life. I would be personally comfortable with a three second delay.

Meros is compared a lot to Nano, and understandably so. That said, I want to comment now that the Meros base layer will likely never be faster than Nano, as it has increased checks/security. The goal of Meros is not faster; it's more secure with enough speed to be used in real life.

Review of SECP256K1Wrapper

Hey @kayabaNerve,

Here is my review of SECP256K1Wrapper.nim as of https://github.com/kayabaNerve/Ember/blob/f3da08adf7ef8d819416a9e87e1a4d07bfecd779/src/lib/SECP256K1Wrapper.nim#L17

General notes:

Make sure to check our Ethereum keys wrapper: https://github.com/status-im/nim-eth-keys/blob/master/eth_keys/libsecp256k1.nim

and also the old API that I wrote: https://github.com/status-im/nim-eth-keys/blob/master/old_api/backend_libsecp256k1/libsecp256k1.nim

(License MIT/Apache v2)

Note that both are still untested and unaudited.

secpPublicKey

https://github.com/kayabaNerve/Ember/blob/f3da08adf7ef8d819416a9e87e1a4d07bfecd779/src/lib/SECP256K1Wrapper.nim#L17-L21

Security

There should be a doAssert / size check on the secpPublicKey string.
Also, public keys have several serialized representations, depending on whether they start with:

  • 0x04 (uncompressed - 65 bytes)
  • 0x02 or 0x03 (compressed - 33 bytes)
  • 0x06 or 0x07 (hybrid - 65 bytes)

You should use secp256k1_ec_pubkey_parse instead:

  • It will handle all those cases
  • secp256k1 operations are constant time (for division for example) and will protect the wallet against timing attacks.
  • The secp256k1_pubkey data structure is an array[64, byte] but what is inside is implementation defined. It can change depending on platform (x86, ARM), endianness and libsecp256k1 version.

https://github.com/status-im/secp256k1/blob/be6f5385330905bf1d7cc441be6703cfa7aef847/include/secp256k1.h#L281-L300

/** Parse a variable-length public key into the pubkey object.
 *
 *  Returns: 1 if the public key was fully valid.
 *           0 if the public key could not be parsed or is invalid.
 *  Args: ctx:      a secp256k1 context object.
 *  Out:  pubkey:   pointer to a pubkey object. If 1 is returned, it is set to a
 *                  parsed version of input. If not, its value is undefined.
 *  In:   input:    pointer to a serialized public key
 *        inputlen: length of the array pointed to by input
 *
 *  This function supports parsing compressed (33 bytes, header byte 0x02 or
 *  0x03), uncompressed (65 bytes, header byte 0x04), or hybrid (65 bytes, header
 *  byte 0x06 or 0x07) format public keys.
 */
SECP256K1_API SECP256K1_WARN_UNUSED_RESULT int secp256k1_ec_pubkey_parse(
    const secp256k1_context* ctx,
    secp256k1_pubkey* pubkey,
    const unsigned char *input,
    size_t inputlen
) SECP256K1_ARG_NONNULL(1) SECP256K1_ARG_NONNULL(2) SECP256K1_ARG_NONNULL(3);

So in memory you work with the libsecp256k1 representation, but you use the serialized version for storage or communication.

Example - new Eth-key API:

proc recoverPublicKey*(data: openarray[byte],
                       pubkey: var PublicKey): EthKeysStatus =
  ## Unserialize public key from `data`.
  let ctx = getSecpContext()
  let length = len(data)
  if length < RawPublicKeySize:
    setErrorMsg(InvalidPublicKey)
    return(EthKeysStatus.Error)
  var rawkey: array[RawPublicKeySize + 1, byte]
  rawkey[0] = 0x04'u8 # mark key with UNCOMPRESSED flag
  copyMem(addr rawkey[1], unsafeAddr data[0], RawPublicKeySize)
  if secp256k1_ec_pubkey_parse(ctx, addr pubkey,
                               cast[ptr cuchar](addr rawkey),
                               RawPublicKeySize + 1) != 1:
    return(EthKeysStatus.Error)
  result = EthKeysStatus.Success

proc initPublicKey*(data: openarray[byte]): PublicKey =
  ## Create new public key from binary data blob.
  if recoverPublicKey(data, result) != EthKeysStatus.Success:
    raise newException(EthKeysException, InvalidPublicKey)

or old API

proc parsePublicKeyWithPrefix(data: openarray[byte], result: var PublicKey) =
  ## Parse a variable-length public key into the PublicKey object
  if secp256k1_ec_pubkey_parse(ctx, result.asPtrPubKey, cast[ptr cuchar](unsafeAddr data[0]), data.len.csize) != 1:
    raise newException(Exception, "Could not parse public key")

proc parsePublicKey*(data: openarray[byte]): PublicKey =
  ## Parse a variable-length public key into the PublicKey object
  case data.len
  of 65:
    parsePublicKeyWithPrefix(data, result)
  of 64:
    var tmpData: Serialized_PubKey
    copyMem(addr tmpData[1], unsafeAddr data[0], 64)
    tmpData[0] = 0x04
    parsePublicKeyWithPrefix(tmpData, result)
  else: # TODO: Support other lengths
    raise newException(Exception, "Wrong public key length")

Performance:

From a performance point of view, you should change (uint8) parseHexInt(pubKey[i .. i + 1]) to (uint8) parseHexInt(pubKey.toOpenArray(i, i + 1)), because slicing creates a new seq allocated on the heap, while toOpenArray provides a view. Note that toOpenArray requires devel. In any case, this doesn't really matter, because the string should be passed to libsecp256k1 anyway.

secpSignature

https://github.com/kayabaNerve/Ember/blob/f3da08adf7ef8d819416a9e87e1a4d07bfecd779/src/lib/SECP256K1Wrapper.nim#L23-L32

Security

Similar to the public key, you should use secp256k1_ecdsa_recoverable_signature_parse_compact, like in the new API:

proc recoverSignature*(data: openarray[byte],
                       signature: var Signature): EthKeysStatus =
  ## Unserialize signature from `data`.
  let ctx = getSecpContext()
  let length = len(data)
  if length < RawSignatureSize:
    setErrorMsg(InvalidSignature)
    return(EthKeysStatus.Error)
  var recid = cint(data[KeyLength * 2])
  if secp256k1_ecdsa_recoverable_signature_parse_compact(ctx, addr signature,
                                           cast[ptr cuchar](unsafeAddr data[0]),
                                                         recid) != 1:
    return(EthKeysStatus.Error)
  result = EthKeysStatus.Success

proc initSignature*(hexstr: string): Signature =
  ## Create new signature from hexadecimal string representation.
  var o = fromHex(stripSpaces(hexstr))
  if recoverSignature(o, result) != EthKeysStatus.Success:
    raise newException(EthKeysException, libsecp256k1ErrorMsg())

or the old one

proc parseSignature*(data: openarray[byte], fromIdx: int = 0): Signature =
  ## Parse a compact ECDSA signature. Bytes [fromIdx .. fromIdx + 63] of `data`
  ## should contain the signature, byte [fromIdx + 64] should contain the recovery id.
  assert(data.len - fromIdx >= 65)
  if secp256k1_ecdsa_recoverable_signature_parse_compact(ctx,
      result.asPtrRecoverableSignature,
      cast[ptr cuchar](unsafeAddr data[fromIdx]),
      cint(data[fromIdx + 64])) != 1:
    raise newException(ValueError, "Signature data is invalid")

Note: when I tried to build a pure-Nim secp256k1-compatible lib, I couldn't work out the in-memory ECDSA signature representation, so just use the libsecp256k1 procs.

Serialization of Public key and Signature

A serialized public key or signature is what we use "openly".

You should use the corresponding secp256k1_ec_pubkey_serialize for public keys, plus secp256k1_ecdsa_recoverable_signature_serialize_compact (for a recoverable 65-byte signature) or secp256k1_ecdsa_signature_serialize_compact (for a plain 64-byte signature). We use the recoverable signature for Ethereum.

New API

proc toRaw*(pubkey: PublicKey, data: var openarray[byte]) =
  ## Converts public key `pubkey` to serialized form and store it in `data`.
  var key: array[RawPublicKeySize + 1, byte]
  assert(len(data) >= RawPublicKeySize)
  var length = csize(sizeof(key))
  let ctx = getSecpContext()
  if secp256k1_ec_pubkey_serialize(ctx, cast[ptr cuchar](addr key),
                                   addr length, unsafeAddr pubkey,
                                   SECP256K1_EC_UNCOMPRESSED) != 1:
    raiseSecp256k1Error()
  assert(length == RawPublicKeySize + 1)
  assert(key[0] == 0x04'u8)
  copyMem(addr data[0], addr key[1], RawPublicKeySize)

proc getRaw*(pubkey: PublicKey): array[RawPublicKeySize, byte] {.noinit, inline.} =
  ## Converts public key `pubkey` to serialized form.
  toRaw(pubkey, result)

proc toRaw*(s: Signature, data: var openarray[byte]) =
  ## Converts signature `s` to serialized form and store it in `data`.
  let ctx = getSecpContext()
  var recid = cint(0)
  assert(len(data) >= RawSignatureSize)
  if secp256k1_ecdsa_recoverable_signature_serialize_compact(
    ctx, cast[ptr cuchar](addr data[0]), addr recid, unsafeAddr s) != 1:
    raiseSecp256k1Error()
  data[64] = uint8(recid)

proc getRaw*(s: Signature): array[RawSignatureSize, byte] {.noinit, inline.} =
  ## Converts signature `s` to serialized form.
  toRaw(s, result)

Old API Signature and Public Key

proc serialize*(s: Signature, output: var openarray[byte], fromIdx: int = 0) =
  ## Serialize an ECDSA signature in compact format, 65 bytes long
  ## (64 bytes + recovery id). The output is written starting from `fromIdx`.
  assert(output.len - fromIdx >= 65)
  var v: cint
  discard secp256k1_ecdsa_recoverable_signature_serialize_compact(ctx,
    cast[ptr cuchar](addr output[fromIdx]), addr v, s.asPtrRecoverableSignature)
  output[fromIdx + 64] = byte(v)

proc serialize*(key: PublicKey, output: var openarray[byte], addPrefix = false) =
  ## Exports a publicKey to `output` buffer so that it can be
  var
    tmp{.noInit.}: Serialized_PubKey
    tmp_len: csize = 65

  # Proc always return 1
  discard secp256k1_ec_pubkey_serialize(
    ctx,
    tmp.asPtrCuchar,
    addr tmp_len,
    key.asPtrPubKey,
    SECP256K1_EC_UNCOMPRESSED
  )

  assert tmp_len == 65 # header 0x04 (uncompressed) + 128 hex char
  if addPrefix:
    assert(output.len >= 65)
    copyMem(addr output[0], addr tmp[0], 65)
  else:
    assert(output.len >= 64)
    copyMem(addr output[0], addr tmp[1], 64) # Skip the 0x04 prefix

proc toString*(key: PublicKey): string =
  var data: array[64, byte]
  key.serialize(data)
  result = data.toHex

proc toStringWithPrefix*(key: PublicKey): string =
  var data: array[65, byte]
  key.serialize(data, true)
  result = data.toHex

Signing and verify

https://github.com/kayabaNerve/Ember/blob/f3da08adf7ef8d819416a9e87e1a4d07bfecd779/src/lib/SECP256K1Wrapper.nim#L34-L55

You should not convert to cstring for perf; do cast[ptr cuchar](hash[0].addr) or cast[ptr cuchar](hash[0].unsafeAddr) instead to avoid an extra allocation. The hash does not need to be a var parameter.
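
A minimal sketch of the suggested signing call, assuming the recoverable-signature entry point and the surrounding types from the review (illustrative, not the wrapper's actual code):

proc sign(key: PrivateKey, hash: string): Signature =
  ## `hash` stays a non-var parameter; casting the address of its first
  ## byte avoids the allocation a cstring conversion would incur.
  let ctx = getSecpContext()
  if secp256k1_ecdsa_sign_recoverable(
      ctx, addr result,
      cast[ptr cuchar](unsafeAddr hash[0]),
      cast[ptr cuchar](unsafeAddr key),
      nil, nil) != 1:
    raise newException(EthKeysException, "Could not sign message")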

For verification, alternatively, with recoverable signatures you can use secp256k1_ecdsa_recoverable_signature_parse_compact followed by secp256k1_ecdsa_recover to retrieve the public key the message was signed with.

Test vectors

Please refer to the nim-eth-keys tests and the eth-keys tests from the Ethereum Foundation repo.

Tests in the tests.nim file https://github.com/status-im/nim-eth-keys/blob/master/tests/tests.nim are against the new API; all the others are for the old API (which more closely resembles the Ethereum Foundation tests).

DBDumpSample doesn't include Mints.

Title says it all.

The problem exists because Mints don't have obvious hashes. By looking at the Blockchain/Consensus data, it's possible to recreate Mints, yet that's annoying as hell. Claimed Mints can be grabbed rather easily though.

Reloaded Transactions DAG doesn't match a consistently live existing DAG.

We do not reload unarchived Elements because we don't save their signatures to the DB. As we reload Transactions based on Elements, we therefore don't reload Transactions not mentioned in a Block.

That said, we do save their UTXOs to the DB. When we add a TX spending a UTXO, we don't pull up the source TX; we just check that the DB has the UTXO. Therefore, it's possible to reload Meros without a specific Transaction in its cache, yet successfully add a Transaction spending a UTXO from that specific Transaction.

This isn't a problem on a live network, where the transaction will be downloaded with the next Block, yet has ramifications for solo nodes.

Server only accepts one Client.

On Windows, the Server will accept a Client, yet then synchronously handle it, never accepting more clients.

This can be proven by adding an echo statement after L30 of src/Network/Server.nim. This is believed to be due to the recent async changes in Nim, and to have been introduced at some point between 0.18 and 0.19.

`Address.toBN()` occasionally returns the wrong BN.

This may be an issue with Base32, as we have test vectors, but we don't do random testing. With addresses, we generate fresh Private Keys and test 20 instances.

It took ~400 addresses before I got it to trigger. That said, it must never happen.
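
A minimal sketch of the random testing this calls for; the address constructor and BN conversions are hypothetical stand-ins:

import random

randomize()
for _ in 0 ..< 1000:
  var keyBytes: array[32, byte]
  for b in keyBytes.mitems:
    b = byte(rand(255))
  let address = newAddress(keyBytes)          # hypothetical constructor
  doAssert address.toBN() == keyBytes.toBN()  # round-trip must hold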

Forcing an Entry to default via multiple Checkpoint voting.

Idea credit of @PlasmaPower.

Checkpoints allow multiple votes without penalty. Some Merit Holders may think a set of Blocks isn't malicious, while others will think it is. Because of this, multiple potential Checkpoints may be issued before one is finalized.

Right now, Entries decide to default after 6 blocks and default at the checkpoint. If the fate of competing Entries is decided at a Checkpointed Block, and two competing blocks appear with two different outcomes for the competing Entries, two checkpoints will be created. If each gets roughly half the votes, a Merit Holder can tip one and take advantage of the defaulted transaction. If they then tip the other, and provide a longer chain on the other checkpoint, they can reverse the transaction for an alternative.

Merit Holders need to coordinate before actually voting and present a united front to stop this.

You can double mine Verifications.

This doesn't mean a miner can force a transaction to be verified; the Lattice does check this. It skews the EMB minting, since it looks like a Verifier did more than they actually did.

The Lattice adds Verifications with the current state, but if those Verifications aren't mentioned in a Block, those Verifications' `current` state changes.

If a Verification for 2% of the Merit is added to an Entry, and that Verification isn't mentioned in the next Block but in the Block after it, the amount of Merit it represents changes. In the case it ends up representing only 1% of the Merit, live nodes will still say it represents 2%, while synced nodes will (correctly) say it represents 1%.

When a Block comes in, or when an Entry is archived, we need to recalculate the Merit behind it. The first is more accurate but more expensive, while the second is less accurate but less expensive.

This does raise an issue where a verified Entry can become unverified. An Entry becomes verified when it has TotalMerit / 2 + 1 behind it. Updating that check to TotalMerit / 2 + 601 means that, as long as the Entry is mentioned in the next block (which it will be, with that much Merit behind it), it will remain verified.

Double spending funds.

This is the... partner bug to #28.

If you have 100 on your account and create Sends at X and X + 1, both for 100, and both are received before the one at X is confirmed, they'll be marked as valid; since they're at different nonces, both will be confirmed, assuming conflicting Entries (same nonce) aren't created.

Ember uses GMP for numbers, so I believe this thankfully won't cause a negative overflow, yet it still allows doubling your coins if the timing is correct.

Verifications with unknown hashes breaks global consensus.

For a transaction to default, it must have no competitors and gain only a minority of Merit. Verifications with unknown hashes look effectively identical.

When a Checkpoint comes around, we need to explicitly specify which transactions defaulted OR which Verifications look like defaults yet actually have unknown hashes. The second will likely produce a smaller output. If the Checkpoint misrepresents the state, the Merit Holders should not vote it in.

There is the risk that Merit Holders undo a transaction by marking the hash as unknown. Therefore, if an unknown hash does receive the majority of Merit, and the Checkpoint calls the hash unknown, it should be considered technically incorrect.

When it comes to undoing a defaulting transaction, this would happen instead of a transaction being defaulted.

This does raise the security risk where the majority of Merit Holders could censor transactions. It is not a security risk when the majority simply doesn't verify a Transaction, as only one Verification needs to appear on chain for it to default. That said, the majority of miners can already stop the Verification from appearing on chain.

Meros doesn't use properly sized numbers.

Nonces, times, and proofs should all be uint32, as the protocol provides each value with a full 4 bytes. An int, for as long as we support 32-bit platforms, is only guaranteed to have 31 bits.

This affects:

  • Mint nonces.
  • Send/Data proofs.
  • Element nonces.
  • ConsensusIndex/MeritHolderRecord nonces.
  • BlockHeader time/proof/nonce.
  • Difficulty times.
  • Blockchain indexing/height.

On a similar note, our wide use of Natural, meant to give int uint-like safety, means nothing since we've disabled checks.
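
A minimal sketch of reading one of these 4-byte fields into a properly sized value (byte order shown little-endian, as an assumption):

proc parseUint32(data: openArray[byte], offset: int): uint32 =
  ## An int is only guaranteed 31 usable bits on 32-bit platforms, so
  ## the full 4-byte protocol value must land in a uint32.
  uint32(data[offset]) or
    (uint32(data[offset + 1]) shl 8) or
    (uint32(data[offset + 2]) shl 16) or
    (uint32(data[offset + 3]) shl 24)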

You can add Verifications after an Entry's epoch.

All Entries are sorted into a period of 6 blocks called an Epoch (based on the block each first appeared in; it's one-in, one-out). When you add a Verification, it is sorted into the proper Epoch.

Once the Epoch is out of scope, because it's over 6 blocks old, Verifications can be resubmitted. This skews distribution and can cause orphaned Entries (Entries that didn't get enough Merit to be verified) to become verified, which is a problem.
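
A minimal sketch of the one-in, one-out window, with hypothetical simplified types; the point is that a Verification for an Entry no longer in the window must be rejected rather than re-sorted:

import deques, tables

type Epochs = object
  window: Deque[Table[string, seq[string]]]  # per Block: hash -> verifiers

proc initEpochs(): Epochs =
  Epochs(window: initDeque[Table[string, seq[string]]]())

proc shift(epochs: var Epochs, latest: Table[string, seq[string]]) =
  ## Admit the newest Epoch; once more than 6 are held, the oldest
  ## leaves scope and its Entries' Verifications are final.
  epochs.window.addLast(latest)
  if epochs.window.len > 6:
    discard epochs.window.popFirst()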

Difficulty never retargets.

The difficulty re-targeting code sets the new difficulty by comparing how many blocks were mined in the period against how many should have been mined. Now that we advance the period when the expected block count is reached, rather than on a timer, the difficulty algorithm always thinks the difficulty is performing perfectly.
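
A minimal sketch of why the current logic is a no-op (names hypothetical):

proc retarget(difficulty: float, mined, expected: int): float =
  ## The new difficulty scales by mined / expected, but the period now
  ## ends exactly when mined == expected, so the ratio is always 1.0
  ## and the difficulty never moves.
  difficulty * (mined / expected)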

Networking code breaks across different endians.

This is due to our raw bit access.

Big-endian is the network standard (and some things, like Java, will automatically flip the byte order for transmission), but most CPUs are little-endian (x86; ARM supports both, but most OSs run it little-endian).

This is not critical, because it barely affects anyone. I'd rather just use little-endian for networking; I see no reason to convert to big-endian and back.
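
A minimal sketch of that approach: serialize with explicit byte extraction in little-endian, so raw in-memory access (and host endianness) never reaches the wire:

proc toLittleEndian(value: uint32): array[4, byte] =
  ## Explicit extraction is portable across big- and little-endian hosts.
  for i in 0 ..< 4:
    result[i] = byte((value shr (8 * i)) and 0xFF'u32)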

Node will double verify Entries.

The Node verifies every Entry it successfully adds to the Lattice. It'll successfully add conflicting Entries. Therefore, it verifies both.

If a funding source is spent in the same block, it cannot be synced.

We grab every Verification, then grab every Entry, then add every Entry, then sync every Verification.

A funding source (Claim/Receive) is only spendable after it's verified. When syncing, we add it, don't verify it, and then try to spend it, which fails since it's not verified.

The order of Mints is decided by the order of Rewards which is partially decided by the Nim standard lib.

This is not a bug per se, but it makes building a node in almost any other language very difficult.

This will also cause future nodes, if Nim changes its algorithms, to no longer view existing valid data as valid.

Mints should be ordered as follows:

  • Greatest score first (which we currently do).
  • Highest key first in the event of a tie (which we do not do).

To be honest, Nim's algorithms may already do highest key first. We shouldn't rely on how Nim does it though, and should check/order ourselves.
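
A minimal sketch of the proposed deterministic ordering, assuming a hypothetical Reward type with score and key fields:

import algorithm

type Reward = object
  key: string    # hex-encoded public key
  score: uint64

proc sortRewards(rewards: var seq[Reward]) =
  ## Greatest score first; highest key first on ties. Explicit, so we
  ## never depend on the standard library's tie-breaking behaviour.
  rewards.sort(
    proc (a, b: Reward): int =
      if a.score != b.score:
        cmp(b.score, a.score)  # higher score sorts earlier
      else:
        cmp(b.key, a.key)      # higher key sorts earlier on a tie
  )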

You can double Claim/Receive/Send coins.

If you:

  • Create a Data, loaded into [0].
  • Create a Receive of Send X on the same Index, loaded into [1].
  • Create a Receive of Send X on Index + 1, loaded into [0].

Both Receives have a chance to be Verified, which means coins can be minted improperly.

This happens because we only check for existing Receives at index 0, as any other index means the Entry is unconfirmed. That said, we don't check again for issues if the Entry later ends up confirmed.

Syncing crashes both Nodes.

The one that's syncing can't find a method in the EventEmitter (null access).

The one that's synced throws [AsyncError] Couldn't handle a Client. This is part of a bigger bug, outside of syncing, and is likely due to our new recv proc.

Certain Receives will crash the entire Network.

Adding a new Send.
Successfully added the Send.
B260148C4B036AC37E3BBB78E89F295E950B712AE2720B9500EAD47C02FD954A58E910579ADECAA791AE4350AA5DB15BD2BC92863CB3C150161A91F0116FE072 was verified.

Adding a new Send.
Failed to add the Send.

Adding a new Verification.
B260148C4B036AC37E3BBB78E89F295E950B712AE2720B9500EAD47C02FD954A58E910579ADECAA791AE4350AA5DB15BD2BC92863CB3C150161A91F0116FE072 was verified.
Successfully added the Verification.
Adding a new Verification.
Failed to add the Verification.
Successfully added the Verification.
Adding a new Verification.
Failed to add the Verification.
Successfully added the Verification.
Adding a new Verification.
B260148C4B036AC37E3BBB78E89F295E950B712AE2720B9500EAD47C02FD954A58E910579ADECAA791AE4350AA5DB15BD2BC92863CB3C150161A91F0116FE072 was verified.
Successfully added the Verification.
Adding a new Verification.
Failed to add the Verification.
Successfully added the Verification.
Adding a new Receive.
Successfully added the Receive.
9518B89EC26EC021DF4587F9FC95D3F7575B9EE758325F0D251D1E37CA1E40630A1433C478C734557FE9865EB40164AAB7D5B9AE9BC93A553F02F606088B4431 was verified.

Adding a new Verification.
Failed to add the Verification.
Successfully added the Verification.
Adding a new Receive.
Traceback (most recent call last)
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/concurrency/threadpool.nim(337) slave
/mnt/c/Users/lukeP/Desktop/Ember/src/main.nim(24) mainWrapper
/mnt/c/Users/lukeP/Desktop/Ember/src/main.nim(23) main
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncdispatch.nim(1649) runForever
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncdispatch.nim(1514) poll
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncdispatch.nim(1280) runOnce
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncdispatch.nim(189) processPendingCallbacks
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncmacro.nim(39) handle_continue
/mnt/c/Users/lukeP/Desktop/Ember/src/Network/Clients.nim(301) handleIter
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncmacro.nim(307) :anonymous
/home/lukep/.choosenim/toolchains/nim-0.19.0/lib/pure/asyncmacro.nim(36) anonymous_continue
/mnt/c/Users/lukeP/Desktop/Ember/src/Network/Network.nim(147) anonymousIter
/mnt/c/Users/lukeP/Desktop/Ember/src/MainLattice.nim(190) :anonymous
/mnt/c/Users/lukeP/Desktop/Ember/src/Database/Lattice/Lattice.nim(109) add
/mnt/c/Users/lukeP/Desktop/Ember/src/Database/Lattice/Account.nim(153) add
/home/lukep/.nimble/pkgs/finals-1.0.0/finals.nim(132) address
SIGSEGV: Illegal storage access. (Attempt to read from nil?)

When we sync things, if the Client doesn't respond properly, the error propagates all the way up and the node crashes.

Example call stack:

Error: unhandled exception: Client didn't respond properly to our VerificationRequest.
Async traceback:
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/concurrency/threadpool.nim(337) slave
  ~/Documents/GitHub/Meros/src/main.nim(29)                                   mainWrapper
  ~/Documents/GitHub/Meros/src/main.nim(28)                                   main
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/asyncdispatch.nim(1651)         runForever
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/asyncdispatch.nim(1516)         poll
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/asyncdispatch.nim(1282)         runOnce
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/asyncdispatch.nim(191)          processPendingCallbacks
  ~/.choosenim/toolchains/nim-0.19.4/lib/pure/asyncmacro.nim(36)              syncVerification_continue
  ~/Documents/GitHub/Meros/src/Network/Client.nim(234)                        syncVerificationIter
Exception message: Client didn't respond properly to our VerificationRequest.
Exception type: [InvalidResponseError]
Error: execution of an external program failed: './build/Meros '
