
sips's Introduction


Synthetix Improvement Proposals (SIPs) describe standards for the Synthetix platform, including core protocol specifications, client APIs, and contract standards.

Contributing

  1. Review SIP-1.
  2. Fork the repository by clicking "Fork" in the top right.
  3. Add your SIP to your fork of the repository. There is a template SIP here and a template STP here.
  4. Submit a Pull Request to Synthetix's SIPs repository.

Your first PR should be a first draft of the final SIP. It must meet the formatting criteria enforced by the build (largely, correct metadata in the header). An editor will manually review the first PR for a new SIP and assign it a number before merging it. Make sure you include a discussions-to header with the URL to a new thread on research.synthetix.io where people can discuss the SIP as a whole.

If your SIP requires images, the image files should be included in a subdirectory of the assets folder for that SIP as follows: assets/sip-X (for SIP X). When linking to an image in the SIP, use relative links such as ../assets/sip-X/image.png.

When you believe your SIP is mature and ready to progress past the Draft phase, reach out to a Spartan Council member on Discord by searching members with the "Spartan Council" role or finding them in the #governance channel. The Spartan Council will schedule a call with the SIP author to go through the SIP in more detail.

Once assessed, a SIP is moved into Feasibility and a Core Contributor is assigned. The Core Contributor will work with the author to conduct a feasibility study. Once the Author and the Core Contributor are satisfied, the SIP is moved to SC Review Pending. Once the Spartan Council has formally reviewed the SIP during the SIP presentation, they can either move it to a vote or send it back to Feasibility. A vote is conducted in the spartancouncil.eth Snapshot space, connected to the staking dApp. If a vote by the Spartan Council reaches a supermajority, the SIP is moved to Approved; otherwise it is Rejected.

Once the SIP has been implemented by either the protocol DAO or the SIP author and relevant parties, the SIP is assigned the Implemented status. There is a 500 sUSD bounty for proposing a SIP that reaches the Implemented phase.

SIP Statuses

  • Draft - the initial state of a new SIP, before the Spartan Council and core contributors have assessed it.
  • Feasibility - a SIP that is being assessed for feasibility, with an assigned Core Contributor.
  • SC_Review_Pending - a SIP that is awaiting a Spartan Council review, after the Author and Core Contributor are satisfied with feasibility.
  • Vote_Pending - a SIP that is awaiting a vote.
  • Approved - a SIP that has reached a supermajority Spartan Council vote in favour.
  • Rejected - a SIP that has failed to reach a supermajority Spartan Council vote in favour.
  • Implemented - a SIP that has been released to mainnet.

Validation

SIPs must pass some validation tests.

It is possible to run the SIP validator locally:

npm install  # if not done already
npm run test

JSON API

All SIP and SCCP data is available in JSON format, by status, at the following URLs:

SIPs

https://sips.synthetix.io/api/sips/draft.json
https://sips.synthetix.io/api/sips/feasibility.json
https://sips.synthetix.io/api/sips/sc-review-pending.json
https://sips.synthetix.io/api/sips/vote-pending.json
https://sips.synthetix.io/api/sips/approved.json
https://sips.synthetix.io/api/sips/rejected.json
https://sips.synthetix.io/api/sips/implemented.json

SCCPs

https://sips.synthetix.io/api/sccp/draft.json
https://sips.synthetix.io/api/sccp/feasibility.json
https://sips.synthetix.io/api/sccp/sc-review-pending.json
https://sips.synthetix.io/api/sccp/vote-pending.json
https://sips.synthetix.io/api/sccp/approved.json
https://sips.synthetix.io/api/sccp/rejected.json
https://sips.synthetix.io/api/sccp/implemented.json
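For consumers of these feeds, the URL scheme above can be wrapped in a small helper. A minimal Python sketch (the fetch function is illustrative only; `feed_url` and `fetch` are names invented here):

```python
import json
from urllib.request import urlopen

BASE = "https://sips.synthetix.io/api"
STATUSES = ["draft", "feasibility", "sc-review-pending", "vote-pending",
            "approved", "rejected", "implemented"]

def feed_url(kind: str, status: str) -> str:
    """Build the JSON feed URL; kind is 'sips' or 'sccp' (note: sccp is singular)."""
    assert kind in ("sips", "sccp") and status in STATUSES
    return f"{BASE}/{kind}/{status}.json"

def fetch(kind: str, status: str):
    """Fetch and decode one status feed (network call, shown for illustration)."""
    with urlopen(feed_url(kind, status)) as resp:
        return json.load(resp)

print(feed_url("sips", "approved"))
# e.g. data = fetch("sips", "approved")
```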

Automerger

The SIP repository contains an "auto merge" feature to ease the workload for SIP editors. If a change is made via a PR to a draft SIP, the SIP's authors can approve the change on GitHub to have it auto-merged by the sip-automerger bot.


sips's Issues

Test Cases for SIP-210

Given that SIP-210 strongly resembles the original ETH Wrappr proposal, SIP-112, surely the test cases there could be readily ported over to the current SIP-210 draft?

Unless there's additional nuance I'm missing between the DAI wrapper and SIP-112.

Solutions to Snapshotters That Work on L1

I recently suggested an idea on Discord about a potential solution to snapshotters, and wanted to formalize it here before moving to the SIP stage in order to receive feedback from core contributors. The solution aims at closing the attack vector of minters minting before the fee-period close, burning after the fee-period close (after the waiting period), and claiming without contributing to the debt pool.

The basic premise of the solution revolves around withholding rewards until at least one fee period has passed. If a user initiates a burn (not burn-to-target), the eSNX rewards are burned (or cleared from the record) and the sUSD rewards are sent to the fee pool. Burn-to-target, however, can be done without any loss of rewards for the user.

The implementation requires a structured hashmap where each user address maps to sUSD/eSNX rewards and the relevant feePeriodID.

Withholding of Rewards:
When a user initiates a claim, first check whether the feePeriodID in the hashmap is the same as the current one on-chain:

  • If they match, the sUSD/eSNX rewards earned by the address are updated in the hashmap and nothing is paid to the user (the sUSD rewards are potentially sent to a separate withholding contract).
  • If they don't match (the situation where the user claimed last week but hasn't yet received their rewards), the sUSD rewards are transferred to the user (from the withholding contract) and the values in the hashmap are overwritten with the rewards of the current fee period.

Burning of Rewards on Unstake:
If the user initiates a burn, the sUSD rewards are sent to the fee pool (from the withholding contract) and the eSNX record is cleared from the hashmap (as if the user had never claimed pre-snapshot).

Advantages:

  • Closes the snapshot attack vector
  • Is more gas-efficient than continuous rewards
  • Is simple enough for users to understand: rewards are withheld for one fee period
  • Allows the waiting period to be reduced further (to just enough to kill off the front-minting attack vector)

Disadvantages:

  • Users may be confused by not receiving rewards on their first claim

Synthetix protocolDAO Phase 1 & 2

ProtocolDAO Phase Zero is now implemented with the transition from our EOA owner to a Gnosis Safe multisig. The Safe is now the nominated owner of all mainnet contracts.

There are three SIPs required for this transition planned for the upcoming Hadar release:

Delegated Migrator - ProtocolDAO1
System pause functionality
Decentralised circuit breaker

The next phase of the ProtocolDAO will be implemented in the Hadar release later this month. The delegated migrator contract will take ownership of the mainnet contracts. The gnosis safe will own the delegated migrator. Any of the signers will be able to publish a new configuration to the delegated migrator contract which can be reviewed by the community to ensure that it aligns with the proposed changes in the SIPs included in that release. There will be a grace period before the changes can be implemented. Once the grace period expires the upgrade can proceed. This issue lays out a proposed spec for the control mechanisms for future contract upgrades in phase one and phase two. Phase two will introduce additional mechanisms to pass governance control onto the community to ensure the protocol improvements have reached sufficient consensus.

Specification:

  • All synthetix contracts have an owner
    • The contract pattern is logic, state and proxy
    • The owner can rewire logic contracts to the state and proxies for upgrades
  • New delegated migrator contract to take ownership of all contracts
    • Delegated migrator has an owner
      • is owned by ProtocolDAO
    • Delegated migrator can receive a proposed tx to rewire any contract
      • Rewire proposal is public function (maybe private initially)
  • ProtocolDAO owns delegated migrator
    • Member number is configurable
      • Modifiable by token vote (phase 2)
      • Quorum to be specified (phase 2)
    • Voting threshold is configurable
      • Modifiable by token vote (phase 2)
      • Quorum to be specified (phase 2)
    • After rewire proposal is submitted members vote to approve
      • Configurable grace period (initially 48 hours)
      • Modifiable after grace period
    • Token holders can veto proposal during grace period with x% of tokens voting no
      • Modifiable by token vote (phase 2)
      • Quorum to be specified (phase 2)
    • Delegated migrator can be upgraded
      • Requires token vote for approval (phase 2)
  • Any protocolDAO member can call pause
    • Delegated migrator has the ability to pause all contracts
    • Single member pause is 1 hour (configurable)
    • m/n required for indefinite pause
    • Token holders can vote to unpause with x% of tokens voting (phase 2)

We are investigating using an Aragon DAO in place of the Gnosis Safe for phase one, but most likely this will be implemented in phase two. Using Aragon will enable the functionality above, and will allow potential future functionality to be introduced in later upgrades.
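A toy model of the grace-period flow in the spec above (class, method, and timing names here are illustrative assumptions, not the actual contract interface):

```python
GRACE_PERIOD = 48 * 3600   # seconds; initially 48 hours, configurable

class DelegatedMigrator:
    """Sketch: propose -> grace period -> execute, with a token-holder veto window."""
    def __init__(self):
        self.proposals = {}   # proposal id -> {"submitted": ts, "vetoed": bool}

    def propose(self, pid, now):
        self.proposals[pid] = {"submitted": now, "vetoed": False}

    def veto(self, pid):
        # Stands in for x% of tokens voting no during the grace period (phase 2)
        self.proposals[pid]["vetoed"] = True

    def can_execute(self, pid, now):
        p = self.proposals[pid]
        return not p["vetoed"] and now - p["submitted"] >= GRACE_PERIOD

m = DelegatedMigrator()
m.propose(1, now=0)
assert not m.can_execute(1, now=3600)           # still inside the grace period
assert m.can_execute(1, now=GRACE_PERIOD + 1)   # grace period expired
```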

Rethinking frozen iSynths

iSynths are currently frozen the first time a rate comes into the ExchangeRates contract via the Synthetix oracle that hits, or is beyond, either of its limits. The price remains fixed - or frozen - at that limit until a purge and price reset from the protocolDAO. This technique won't work with the upcoming migration to Chainlink Phase 2, and needs a replacement.

SIP-61 has been proposed to replace the entire freeze, purge and reset flow with keeper functions, but in its current form it potentially opens up attack vectors. Another option for addressing the need to purge and reprice is to replace these functions with a version of Fee Reclamation (SIP-37) that tracks the price diff the user owes. This is much cleaner, but still presents the problem of how to freeze iSynths when oracle pricing is decentralized.

Background

The current ExchangeRates contract provides rates pushed from the Synthetix centralized oracle and rates pushed from decentralized oracles onto various Chainlink Aggregator contracts that ExchangeRates reads from. The former is being phased out completely in favor of the latter in SIP-36 and this presents a problem - how do we flag an inverse synth as having reached its limit? We currently perform this in ExchangeRates when the centralized oracle pushes a price (as you can see here), but with decentralized oracles, we cannot hook into their price push transactions the way we can with the centralized oracle.

The frozen flag is what we use within the protocol to indicate that no more price updates will be received for the iSynth until it is purged of holders and the pricing bands reset. Currently users can still exchange in and out of a frozen iSynth, though exchanging into a frozen iSynth is not profitable as the price is fixed and they will eventually have to exchange out again (or be purged) and pay a fee.

That being said, even without the frozen flag, the ExchangeRates contract has the ability to limit the prices of iSynths that are above or below the limits by simply capping the price at the limit that was breached. For example, say iETH has a $200 entryPrice, an upper limit of $300 and a lower limit of $100. If the price of sETH is, say, $350, then iETH would be max((2 * 200) - 350, 100) and thereby effectively frozen at 100.

So, given all that, do we still need the frozen flag and if so, how can we decentralize the enabling of it?
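The capping behaviour described above reduces to a one-line clamp. A Python sketch of the assumed pricing rule:

```python
def inverse_price(entry, lower, upper, price):
    """iSynth rate: 2 * entryPrice - price, clamped to [lower, upper]."""
    return min(max(2 * entry - price, lower), upper)

# iETH example from above: entry $200, limits $100 / $300
assert inverse_price(200, 100, 300, 350) == 100   # capped at the lower limit
assert inverse_price(200, 100, 300, 250) == 150   # back in range, no frozen flag
```

Note that without a frozen flag the second line is exactly the concern raised below: the rate moves again once the price returns within bounds.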

Option 1: Remove the frozen flag altogether

We could opt to remove the frozen flag on a synth price completely. Removing the flag altogether reduces the complexity of this SIP considerably but has a few implications.

The primary concern is how to determine, on-chain, that the price has gone outside of the limits in the past if it is currently within its limits. In other words, in the above example, if sETH returns to $250 after temporarily hitting $350, then without the frozen flag the rate of iETH would drop to max((2 * 200) - 250, 100), which is $150.

The flag itself is useful for a few reasons:

  1. It is a simple indicator for users to read on and off-chain that no more price updates will occur, until a purge and reset.

  2. It allows the system to prevent an iSynth that has received a price outside its limits, from ever getting another price until it is reset.

  3. The purge action, currently performed by the protocolDAO, can check for it as it currently does to allow system-level purging regardless of how much supply is in circulation.

Possible Mitigations

Without the flag, there are still some ways to mitigate these concerns:

  1. We can look back on-chain using the roundId incrementer in ExchangeRates for each price provider, until the iSynth was created or last repriced, and determine whether it was ever outside its limits. The major downside to this approach is that it is gas-intensive for any on-chain lookup, and could potentially be too gas-intensive for a single transaction. We'd need to cap it to looking back over the last N rounds, and how large N can be depends on how much gas we are willing to spend looking back. While this is probably manageable from a purging perspective (as purging is currently managed by the protocolDAO), it would be too expensive for any contract to query on-chain.

  2. Alternatively, we could simply allow iSynths to reactivate if the price returns to within the bounds. Because the price is frozen outside the bounds, there is no advantage to exchanging into the iSynth: the only way price action will occur is if the price returns to within the bounds, which becomes less and less likely the further from the bounds it is. However, the concern here is that when the protocolDAO purges, the price could have again returned to within the bounds, so the on-chain check for frozen would fail and the purge would be blocked. In that case either a) we prevent purging in those instances, requiring the protocolDAO to wait until the price goes out of bounds again, or b) we allow purging of an iSynth regardless of whether it is frozen. The former is probably an acceptable compromise as long as purging/resetting can be done in a timely manner; the latter is probably too concerning for iSynth holders, who would know they could be purged at any time. As for usability, it's arguable that not freezing iSynths, and allowing them to regain pricing if they return to in-bounds pricing, is useful: it means that if iSynths momentarily hit their limits, they could still remain useful without requiring the friction of a purge and reset.

Option 2: Bundle frozen into exchanges & transfers

As Fee Reclamation handles settlement within subsequent exchanges, we could amend any exchange into an iSynth (unlikely to matter, as the price will already be capped by ExchangeRates), any exchange out of it, or any transfer of it to check the current rate in ExchangeRates and, if outside the limit, enable the frozen flag.

Concerns

  1. Gas. This will cost more gas for the user whose exchange or transfer triggers the freeze, but it shouldn't be more than about 50k, and it only applies to the first user who transacts with that iSynth once it is outside the range.

  2. If no user activity happens while the price is outside of the range, and the price then returns to the range, frozen will not be applied.

Option 3: Incentivize freeze

As with Option 2 above, but the freeze would be incentivized with SNX via a keeper function, as with the freezing in SIP-61.

Proposal

So either we remove the frozen flag and accept that an iSynth, while always capped within its upper and lower bounds, can still move again after hitting its bounds, and only allow purging when the current price is out of bounds; or we bundle freezing into exchanges/transfers, and potentially incentivize it if there is concern that it won't be frozen as soon as an out-of-bounds price appears.

Synth exchange suspension for protocol security

In order to mitigate unknown market manipulation attacks, such as the recent spot-market manipulation of MKR, we need a mechanism to manually suspend (and potentially resume) exchanges into specific synths. The current solution involves updating the centralized Synthetix oracle to post a fixed price, which will no longer work once we fully migrate to Chainlink (https://sips.synthetix.io/sips/sip-36).

SNX Auction Trial

As per recent Discord discussions, the current arb pool is less efficient than it could be and has poor incentive alignment. An alternative mechanism I proposed was to hold a weekly SNX auction in which SNX escrowed for 12 months is sold at a discount in a dutch auction. The proceeds of this auction will be used to buy sETH from the Uniswap sETH pool, putting upward pressure on the peg. In order to determine whether this mechanism is viable, I am proposing to run a trial on Thursday at 23:00 UTC (10am AEST on Friday).

  1. 100k SNX will be auctioned
  2. The trial will run for 2.5h
  3. The SNX price in ETH will be fixed at the opening of the auction
  4. The initial discount will be 2%
  5. Every 30 mins the discount will increase by 2%
  6. The maximum discount will be 10%
  7. All orders will be filled at the highest discount rate

To illustrate: if a bid for 80k SNX is placed at 2%, then no bids at 4% and 6%, but then a bid for 20k SNX is placed at an 8% discount, all SNX will be sold at the 8% discount. If there is excess demand at a specific discount level, the bids at earlier levels will all be filled and the final discount level will be distributed pro rata.

This manual trial will require a level of trust, as discounts will be calculated manually and SNX will be distributed at the close of the auction, but the ETH must be sent during the auction to confirm and lock in a bid. If the trial is successful, a contract will be written to automate this process. To provide an additional level of assurance, a Gnosis multisig will be used as the address to receive the proceeds, with a trusted community member acting as a second signer.
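The fill rules of the trial can be sketched as follows (an illustrative model; `run_auction` and the bid representation are assumptions, not part of the actual trial tooling):

```python
def run_auction(bids, supply=100_000):
    """bids: list of (discount, amount) in time order (discounts non-decreasing).
    All fills clear at the highest discount reached; if demand at the final
    level exceeds remaining supply, earlier levels fill fully and the final
    level is shared pro rata. Returns (clearing_discount, fills)."""
    fills = [0.0] * len(bids)
    remaining = supply
    clearing = bids[0][0]
    i = 0
    while i < len(bids) and remaining > 0:
        d = bids[i][0]
        # Gather all bids placed at this discount level
        j, level_total = i, 0.0
        while j < len(bids) and bids[j][0] == d:
            level_total += bids[j][1]
            j += 1
        if level_total <= remaining:
            for k in range(i, j):
                fills[k] = bids[k][1]
            remaining -= level_total
        else:
            ratio = remaining / level_total   # pro rata at the final level
            for k in range(i, j):
                fills[k] = bids[k][1] * ratio
            remaining = 0.0
        clearing = d
        i = j
    return clearing, fills

# Example from above: 80k at 2%, then 20k at 8% -> everything clears at 8%
clearing, fills = run_auction([(0.02, 80_000), (0.08, 20_000)])
assert clearing == 0.08 and fills == [80_000, 20_000]
```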

Arb Pool Upgrade Proposal

Abstract

Right now our arb pool is working OK, but it is a bit inefficient in the way it utilizes capital, and there are concerns that it might not be able to stand up to acute system stressors. This proposal on how we can improve the arb pool consists of three parts, which I'll break down separately:

  1. Modification of Arb Loop
  2. Dynamic Arb Incentives
  3. sETH Pool Migration

1) Modification of Arb Loop

Situation - The current arb contract allows users to buy SNX with ETH proportional to the current sETH:ETH peg rate when the peg is below 0.99. In the backend, the arb contract converts the ETH to sETH and provides newly minted SNX to the arber. Based on this design, we're essentially conducting a dilutive share offering every week - even though new SNX is being minted, 99% of the value of that SNX eventually goes back to SNX holders in sETH form. Therefore we aren't leaking much value out of the system, compared to Uniswap LP rewards where the SNX system does not receive any assets in return for the airdrops.

The problem with this model is that it's not an efficient use of inflation. Assume, for example, the following scenario: a new fee period starts with 72,000 newly minted SNX dropped into the arb contract. One trader buys out the entire contract at an average peg price of 0.99. Assuming SNX = 0.01 ETH, they put 712.8 ETH worth of buying pressure on the sETH/ETH pool for this trade.

Solution - Imagine if, instead of receiving SNX from the contract, the trader receives sETH at a 1:1 rate to ETH, plus a proportional amount of SNX based on the sETH discount they bought in at. So if a trader were to put in 712.8 ETH @ 0.99 peg, they would receive 712.8 sETH + 712.8 SNX.

WOW! The trader still receives the 1% arb, and you get the same net effect on the sETH/ETH pool with only 1% of the inflation required in the current model. Overall, the net value outflows from the system that SNX holders bear are the same in both models.

Benefits:

  • Positive for synth supply (old system essentially lowers synth supply based on how much sETH is used for arb)
  • Significantly lower inflation rate (helps us hit the 2-4% total inflation target)
  • Solves the problem of figuring out what we do with leftover sETH in contract
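The arithmetic behind the two models, using the numbers from the scenario above (assumed: SNX = 0.01 ETH, peg = 0.99):

```python
SNX_ETH, PEG = 0.01, 0.99
eth_in = 712.8

# Current model: ETH buys newly minted SNX at the discounted peg rate.
old_snx = eth_in / PEG / SNX_ETH
assert round(old_snx) == 72_000          # the whole weekly drop

# Proposed model: trader gets sETH 1:1 plus SNX covering the 1% discount.
new_seth = eth_in
new_snx = eth_in * (1 - PEG) / SNX_ETH
assert new_seth == 712.8 and round(new_snx, 1) == 712.8

# Same buy pressure on the pool, ~1% of the inflation spent.
assert round(new_snx / old_snx, 4) == 0.0099
```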

2) Dynamic Arb Incentives

Situation - Right now the arb rewards keep the peg within a range of 0.97 to 1.00, with a tendency to fall off towards the end of the week. No one is raising large concerns about this right now, but if we want to attract any serious traders to the platform, they should have absolute confidence that they can enter and exit synths 1:1 almost all the time.

Another problem is that we don't have any disaster protection: we are using 72,000 SNX per week, which looks like it will always get quickly drained. If the pressure on the peg increases even a little from the current state, we could see the peg deviating very widely (5%+), or see minor peg deviations (2%-4%) for sustained periods - both are bad.

Solution - Adjust the amount of arb incentives supplied every week depending on how far off the peg was, on average, during the previous week.

My proposal is to have a base 5,000 SNX dedicated to the contract every week. For every 0.001 deviation from 0.99, the arb incentive increases by 10%, with the peg deviation capped at 0.95. The reasoning is that the further we are off the peg, the more dire the situation is, and the more resources we should be using.

Here's a graph of how the numbers look and what % of inflation is reflected (based on 150M SNX Supply & 250M SNX Supply)

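The schedule is ambiguous as written (10% per step could be linear or compounding); one possible reading, sketched in Python with compounding assumed:

```python
BASE = 5_000   # weekly SNX dedicated to the contract at peg 0.99

def weekly_incentive(avg_peg):
    """Assumed reading: +10% compounding per 0.001 below 0.99, peg floored at 0.95."""
    peg = max(avg_peg, 0.95)                   # deviation capped at 0.95
    steps = max(0, round((0.99 - peg) / 0.001))
    return BASE * (1.10 ** steps)

assert weekly_incentive(0.99) == 5_000                    # at target: base only
assert weekly_incentive(0.90) == weekly_incentive(0.95)   # floored at 0.95
```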

3) sETH Pool Migration

Situation - The arb pool is currently being exploited by large sETH holders smashing the peg of the sETH pool. They are currently limited by the fact that it is hard to sell the large SNX positions they gain from the sETH sale. The modification of the sETH loop to a more capital-efficient design, as proposed above, makes it easier for the arb pool to be exploited through sandwich attacks. Uniswap trade fees are static (0.30%) and don't protect against sandwich attacks, which cost attackers only 0.60% in total.

Solution - Fork the Uniswap contract and implement the CLP formula for liquidity pools. In addition, set a minimum base fee of 0.30%. The CLP formula replaces the fixed-rate fee with a variable fee based on how much slippage a trade results in. This makes sandwich attacks prohibitively expensive.

Given a pool with assets X and Y, and an input x and output y the CLP formula is:

y = (x * Y * X) / (x + X)^2 where P = X/Y.

More info here: https://medium.com/thorchain/thorchains-immunity-to-impermanent-loss-8265a59066a4
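A quick numerical comparison of the CLP output against the plain constant-product output (the minimum base fee is not modelled here; the larger the trade, the larger the implicit slip fee):

```python
def clp_output(x, X, Y):
    """CLP swap output: y = (x * Y * X) / (x + X)^2, slip-based fee built in."""
    return (x * Y * X) / (x + X) ** 2

def constant_product_output(x, X, Y):
    """Uniswap-style x*y=k output, ignoring the flat 0.30% fee."""
    return Y * x / (X + x)

# Small trades pay almost nothing; large trades pay a fee proportional to slip.
X0, Y0 = 1_000.0, 1_000.0
small = clp_output(1, X0, Y0) / constant_product_output(1, X0, Y0)
large = clp_output(200, X0, Y0) / constant_product_output(200, X0, Y0)
assert large < small < 1.0
```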

stake snx

I staked 53 SNX in Synthetix for one year, but now my balance shows 0.

0xC5250939aB0E1aAF6680D54A2b2154e59a43871e

eSNX an ERC-20 token redeemable for escrowed SNX

As part of our ongoing trial of the SNX auction mechanism, we need a way to automatically distribute escrowed SNX. This has previously been either manually administered or implemented at the protocol level, e.g. SNX staking rewards. However, this creates a lack of flexibility in how escrowed SNX is distributed. This issue proposes a contract that enables three functions:

  1. Minting eSNX (private function) - as part of the Synthetix inflationary supply, the RewardsDistribution contract will be configured to send an amount of SNX to this eSNX contract and then call RewardsDistributionRecipient.notifyRewardAmount(amount). The eSNX contract can then mint an equal amount of eSNX for the SNX, send the SNX to the RewardsEscrow contract, and send the minted eSNX to the configurable (ownerOnly) auction contract address.

  2. Minting eSNX (public function) - after calling SNX.approve(eSNXAddress, amount), anyone can call this function with an amount of SNX to transfer it to the escrow contract and mint eSNX back to the caller, so they can sell escrowed SNX.

  3. eSNX redemption (public function) - sending eSNX to the contract will burn the eSNX and create a vesting entry in the escrow contract.
    Note: this is not currently possible, other than some access given via the FeePool.

This will allow a transferable version of escrowed SNX that can be used to incentivise actions beneficial to the protocol. This eSNX will not function as collateral; it can only be converted to SNX once, and will then be locked to the account it is assigned to.
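The three flows can be sketched as a toy model (the class and method names here are illustrative, not the proposed contract interface, and token transfers are reduced to counters):

```python
class ESNX:
    """Toy sketch of the eSNX contract's three flows."""
    def __init__(self):
        self.balances = {}          # eSNX balances by account
        self.escrowed_snx = 0.0     # SNX held by the RewardsEscrow contract
        self.vesting = []           # (account, amount) vesting entries
        self.auction = "auction"    # configurable (ownerOnly) auction address

    def notify_reward_amount(self, amount):
        """Private flow: SNX from RewardsDistribution -> escrow, eSNX -> auction."""
        self.escrowed_snx += amount
        self.balances[self.auction] = self.balances.get(self.auction, 0) + amount

    def mint(self, caller, snx):
        """Public flow: caller's (approved) SNX -> escrow, eSNX back to caller."""
        self.escrowed_snx += snx
        self.balances[caller] = self.balances.get(caller, 0) + snx

    def redeem(self, caller, amount):
        """Burn eSNX and create a vesting entry in the escrow contract."""
        assert self.balances.get(caller, 0) >= amount
        self.balances[caller] -= amount
        self.vesting.append((caller, amount))

c = ESNX()
c.mint("alice", 50)
c.redeem("alice", 50)
assert c.vesting == [("alice", 50)] and c.balances["alice"] == 0
```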

Synth exchange pausing for spot market closures

From the announcement of transitioning to Chainlink oracles for FX currencies:

Given FX markets close over the weekend, and there is currently no mechanism to pause trading, there is a small chance of a positive black swan event if a fiat currency is expected to experience major price appreciation over a weekend. The Synthetix team is aware of this possibility, and is investigating solutions.

FX currencies typically don't move much over a weekend, but it is possible, and it's likely to become more of an issue when synthetic stocks are launched in the future. For this reason, it's important to start a conversation about what mechanism/s can be built into the Synths to combat this.

Proposal: Fee Reclamations

Problem

Refer to "Problem" definition via Synthetixio/synthetix#298 for details on the front-running situation in Synthetix.

The Exchange Queuing Proposal suggests solving front-running with the use of a queue, to be processed at some future stage after prices have been received. The concern with this approach is that it turns a synchronous process - Synthetix.exchange() - into an asynchronous one. That is, when Synthetix.exchange() completes, the account will not yet have synths, and has to wait for the queue to be processed in a later block before they appear in the user's balance. As mentioned in the issue, this breaks composability; any smart contract that tries to do a Synthetix.exchange() immediately followed by another action that relies on the balance being there will fail.

Proposal

Instead of using a queue, and instead of using the current max-gwei limitation introduced in SIP-12, we allow all exchanges to be processed at whatever gas price the user wishes. Immediately thereafter, there is a small waiting period (of M minutes, say) within which exchanging or transferring any of that synth will fail for the user.

After the waiting period, if a price was received by the oracle during that window that would have led to more profit than the exchange fee, this profit is marked as reclaimable. The next exchange of that synth after the waiting period will always send any reclaimable amount to the fee pool before processing the exchange. For transfers, in order not to break ERC20 conventions, if there is any reclaimable amount on that synth, instead of reclaiming the fees during transfer we propose to always fail the transaction (not unlike when a user attempts to transfer locked SNX). The onus is then on the user to invoke a function on the synth to repay their reclaimable amount - if any.

Workflow

  • Synthetix.exchange() invoked from synth src to dest by user for amount

    • Are we currently within a waiting period for src?
      • Yes: ❌ Fail the transaction
      • No
        • Does user have any reclaimable on src?
          • Yes
            • Is amount > reclaimable?
              • Yes: ✅ Invoke Synthetix.settleFeesReclaimed(src) for synth src then continue to exchange amount - reclaimable
              • No: ❌ Fail the transaction
          • No: ✅ Proceed with exchange as usual
  • Synthetix.settleFeesReclaimed(src) invoked with synth src by user

    • Does user have any reclaimable on src?
      • Yes: ✅ Then send reclaimable to the fee pool and set reclaimable to 0
      • No: ❌ Fail the transaction
  • Synth.transfer() invoked from synth src by user for amount

    • Are we currently within a waiting period for src?
      • Yes: ❌ Fail the transaction
      • No: Does user have any reclaimable on src?
        • Yes: ❌ Fail the transaction
        • No: ✅ Proceed with transfer as usual
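The decision tree above, transcribed into a minimal Python model (the per-user, per-synth state is simplified here to a waiting flag plus a reclaimable balance; function names are illustrative):

```python
def exchange(entry, amount):
    """entry: {'waiting': bool, 'reclaimable': float} for this (user, src) pair.
    Returns the amount actually exchanged after settling reclaimable fees."""
    if entry["waiting"]:
        raise RuntimeError("within waiting period for src")
    r = entry["reclaimable"]
    if r > 0:
        if amount <= r:
            raise RuntimeError("amount does not cover reclaimable")
        entry["reclaimable"] = 0.0   # settleFeesReclaimed: send r to the fee pool
        return amount - r
    return amount

def transfer(entry):
    """Transfers fail during the waiting period or while fees are reclaimable."""
    if entry["waiting"] or entry["reclaimable"] > 0:
        raise RuntimeError("transfer blocked; settle reclaimable fees first")

e = {"waiting": False, "reclaimable": 3.0}
assert exchange(e, 100.0) == 97.0   # fees settled, remainder exchanged
transfer(e)                         # no longer blocked
```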

Concerns

1. Legitimate users have to pay front-running fees

While this post-trade fee reclamation is designed to prevent intentional front-running, it is possible that legitimate users get caught out unintentionally front-running. That is, someone trading unaware of market volatility happens to place a trade right before an oracle update that yields them additional profit. However, this proposal only penalizes them for these immediate profits. Future oracle updates after the waiting period of M minutes have no impact.

2. Friction to exchange & transfer synths

Under this proposal, two points of friction are introduced:

  1. Exchanging a synth soon after it was exchanged into will fail (if less than M minutes have passed). This is a suboptimal user experience, but it's arguably better than the max gas price solution (SIP-12), which impacts all trades. Instead, this limitation only strikes exchanges out of a synth that was recently traded into.

  2. Transferring a synth could now fail if there are any reclaimable fees owing. While this introduces extra friction to the Synthetix ecosystem, it is not dissimilar to locked SNX that cannot be transferred. For the sake of protecting the system against front-runners making profit from SNX stakers, we propose that this is a worthwhile tradeoff.

3. Limitations to composability

The final concern with this approach is that any atomic (i.e. within the same transaction) exchanges or transfers that follow an exchange will fail - causing the entire transaction to fail. As more and more DeFi projects are composed together in smart contracts, we have legitimate concerns that this friction could impede future compositions which have yet to be created.

We have some ideas around how to prevent this - such as an additional exchange function that circumvents this logic but always pays higher fees. Alternatively, we can take into account the on-chain volatility of the src synth and, if it is below some volatility threshold (say 50bps) over some time window (say 24 hours), allow any exchange of src to bypass the fee reclamation process.

We welcome any constructive input you may have around this issue.

Examples

Given the following preconditions:

  • Jessica has a wallet which holds 100 sUSD and this wallet has never exchanged before,

  • and the price of ETHUSD is 100:1, and BTCUSD is 10000:1

  • and the reclamation fee waiting period (M) is set to 3 minutes.


    When

    • she exchanges 100 sUSD into 1 sETH.

    Then

    • ✅ it succeeds as sETH has no reclamation fees for this wallet.

    When

    • she exchanges 100 sUSD into 1 sETH
    • and she immediately attempts to transfer 0.1 sETH

    Then

    • ❌ it fails as the waiting period has not expired

    When

    • she exchanges 100 sUSD into 1 sETH
    • and she immediately attempts to exchange 1 sETH for sBTC

    Then

    • ❌ it fails as the waiting period has not expired

    When

    • she exchanges 50 sUSD into 0.5 sETH.
    • and she immediately attempts to exchange 50 sUSD into 0.005 sBTC

    Then

    • ✅ it succeeds as sBTC has no reclamation fees for this wallet

    When

    • she exchanges 100 sUSD into 1 sETH (paying a 30bps fee)
    • ⏳ and 2 minutes later the price of ETHUSD goes up to 100.25:1
    • ⏳ and another minute later she attempts to transfer this sETH

    Then

    • ✅ the transfer succeeds because the profit made from the oracle update is less than the fee she already paid

    When

    • she exchanges 100 sUSD into 1 sETH (paying a 30bps fee)
    • ⏳ and 1 minute later the price of ETHUSD goes up to 103:1
    • ⏳ and 2 more minutes later she attempts to transfer any of this sETH

    Then

    • ❌ the transfer fails because she profited 3% - 0.3% = 2.7%.

    When

    • she exchanges 100 sUSD into 1 sETH (paying a 30bps fee)
    • ⏳ and a minute later the price of ETHUSD goes up to 103:1
    • ⏳ and 2 more minutes later she invokes settle for sETH
    • and immediately transfers this sETH to another wallet

    Then

    • ✅ the transfer succeeds as the prior settle invocation sent 2.7% of her exchange amount (0.027) to the fee pool, and transfer detected no reclaimable fees remaining.

    When

    • she exchanges 100 sUSD into 1 sETH (paying a 30bps fee)
    • ⏳ and a minute later the price of ETHUSD goes up to 103:1
    • ⏳ and 2 more minutes later she attempts to exchange 1 sETH for sBTC

    Then

    • ✅ the exchange succeeds, sending 2.7% of her exchange amount (0.027 sETH) to the fee pool, and converting the rest into sBTC (minus the exchange fee).

    When

    • she exchanges 100 sUSD into 1 sETH (paying a 30bps fee)
    • ⏳ and no oracle update for ETHUSD occurs after 3 minutes
    • ⏳ once 3 minutes from exchange have elapsed she attempts to exchange

    Then

    • ✅ the exchange succeeds and no reclaimable fee is charged
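The reclamation arithmetic in these examples can be sketched in a few lines. This is a minimal Python sketch, assuming the 30bps exchange fee and 3-minute waiting period from the preconditions above; `ExchangeEntry`, `reclaimable`, and `can_transfer` are illustrative names, not the actual contract interface:

```python
from dataclasses import dataclass

WAITING_PERIOD_MIN = 3   # M, minutes (value from the examples above)
EXCHANGE_FEE = 0.003     # 30 bps, as in the examples


@dataclass
class ExchangeEntry:
    amount: float             # units of the destination synth received
    rate_at_exchange: float   # price paid (e.g. ETHUSD)
    minutes_elapsed: float    # time since the exchange


def reclaimable(entry: ExchangeEntry, current_rate: float) -> float:
    """Amount of the destination synth owed back to the fee pool.

    Profit from subsequent oracle updates is only reclaimed to the
    extent it exceeds the exchange fee already paid.
    """
    price_move = (current_rate - entry.rate_at_exchange) / entry.rate_at_exchange
    excess = price_move - EXCHANGE_FEE
    return max(0.0, excess) * entry.amount


def can_transfer(entry: ExchangeEntry, current_rate: float) -> bool:
    """Transfers fail inside the waiting period, and afterwards only
    succeed once no reclaimable fees are owing (or settle was invoked)."""
    if entry.minutes_elapsed < WAITING_PERIOD_MIN:
        return False
    return reclaimable(entry, current_rate) == 0.0
```

For instance, an exchange at 100:1 followed by a move to 103:1 reclaims 2.7% of the exchanged amount, matching the 3% - 0.3% = 2.7% example above, while a move to 100.25:1 reclaims nothing because the profit is below the fee already paid.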

Alternative arbitrage pool mechanism

One of the mechanisms Synthetix uses to support a tighter peg is the arb pool. This pool receives 5% of the weekly SNX inflation. If the ETH/sETH ratio in uniswap falls below .99, ether can be sent to the arb pool where it is used to purchase sETH via the uniswap sETH pool. This buying pressure lifts the peg, and in return the pool distributes SNX to the wallet that sent the ether at a discount proportional to the ETH/sETH ratio. For more info see: https://synthetix.community/docs/arb-pool or Synthetixio/synthetix#188.

The issue is that the intent was to provide discounted SNX for restoring the peg, but this incentive alignment is weak, as bots are now automatically closing the arbitrage loop through the SNX/ETH uniswap pool to lock in the discount. While this is not a problem in principle, it does expose the system to gaming. The specifics of how the bots are gaming the pool will be provided by Nocturnal in a comment below.

This issue proposes a new mechanism with longer-term alignment: SNX is still deposited into the arb pool but is now purchased directly with ether at a discount proportional to the current discount in the ETH/sETH pool on uniswap, and this SNX is escrowed for 12 months.

The ether proceeds from the SNX purchases are immediately used to purchase sETH via the uniswap pool, putting upwards pressure on the peg. The advantage of this mechanism is that it rewards arbitragers with a longer time horizon, and it also removes the ability to immediately close the arbitrage loop and game the pool.

Now even if a bot is selling into the uniswap sETH pool to artificially lower the ratio in order to access discounted SNX, they must be prepared to accept the slippage from this sETH sale and to hold the SNX longer term. This is less efficient from an arbitrage perspective but should still be more than sufficient to restore the peg while being less gameable.
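As a rough illustration of the proposed purchase mechanics, here is a sketch only; the exact discount formula ("proportional to the peg discount") and the 0.99 trigger are assumptions based on the description above:

```python
def arb_purchase(eth_sent: float, snx_eth_rate: float,
                 seth_peg_ratio: float) -> float:
    """Sketch: SNX sold at a discount equal to the current peg discount.

    seth_peg_ratio is the ETH/sETH ratio on uniswap (e.g. 0.97 means
    sETH trades 3% below peg). The SNX returned is assumed to be
    escrowed for 12 months; the ether sent is then used to buy sETH.
    """
    if seth_peg_ratio >= 0.99:
        raise ValueError("pool is not arbable")
    discount = 1.0 - seth_peg_ratio          # e.g. 0.03 at a 0.97 ratio
    return eth_sent * snx_eth_rate * (1.0 + discount)
```

At a 0.97 peg ratio, 10 ETH buys roughly 3% more SNX than the market rate, but the buyer must hold it through the 12-month escrow.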

Draft SIP: SNX Inflationary Rewards Schedule

Introduction

Some community members (including me) have argued that the current SNX inflationary rewards schedule introduces some unnecessary risk into the system. There are obviously major pros and cons to changing the inflationary rewards schedule, and although I’ll try to provide a balanced opinion, the following proposal mainly focuses on the argument for (which I was given the opportunity to articulate in the last governance call at ~ min 14:30). As a result, the following document will seem biased.

Key arguments for and against

Pros: key argument for changing schedule

When there is a rewards halvening for Bitcoin miners, who own mining hardware, they either like the new return profile for mining and keep mining, or they don’t and go mine something else. They probably don’t dump/sell their mining equipment and if they did, it would not affect the price of BTC (much).

With SNX, when there is a rewards halvening, if minters no longer like the rewards profile and decide to stop minting, then their only rational move is to sell their SNX.

And given that we know the exact date this event is happening, from a game theory perspective, all you need to know is that at least 1 whale is going to sell at this point, and a rational decision is to front run them by selling before they sell, and so on. Under these conditions you could end up essentially having a run on the bank around a rewards halvening which could crash the network. For a protocol which is designed to engender trust and stability, this is not ideal, especially if it is relatively trivial to prevent such a shock to the system.

Community member @gmgh’s comments in discord articulate some of the possible effects that such a shock could have:

For example:

  • minters packing up at the same time
  • synth supply shrinking
  • SNX unlocking to be sold down
  • SNX price dropping
  • sETH LPs getting their income halved and also now dropping in value
  • sETH LPs exiting by withdrawing and converting sETH to ETH
  • sETH getting smashed out of peg
  • arb pool being unattractive as SNX drops relative to ETH

Imo a smooth inflation reduction solves this by removing an "event" to trigger this behaviour; people will just come and go and take all the above actions of their own accord, rather than being "forced" to act in response to an obvious future event. A lot of the game theory of what other people will do, and when they do it, will be removed.

The following proposal articulates an alternative approach to reducing the inflationary emissions, that achieves approximately the same total emissions, without the step change functions that exist today.

Ongoing inflation

There has also been some talk of introducing ongoing inflation to the system long term so that the foundation has an additional incentive lever beyond exchange fees, which may ebb and flow somewhat and may not grow as quickly as needed to sufficiently replace inflationary rewards. This document also includes a proposal for how to introduce ongoing emissions.

Cons: key arguments against changing

The Synthetix network is built on trust. Community members and investors alike need to know that the inflationary schedule is not a moving target. If the team is out there changing the schedule every year or so, why wouldn’t they continue to change it, and to what. The argument above articulates a worst case scenario and it could be argued it is fairly unlikely to actually materialize, so is it worth the risk of developing a reputation for changing the inflationary schedule on a dime when it suits the network to plan for an event that might not happen?

With the above argument in mind, I think that it is crucially important that if the schedule is changed right now, it is done so thoughtfully and under the assumption that this will be the last major change at least for the foreseeable future. And to the extent that we can, if there are any long term decisions that could be built in (like ongoing inflation for example) that they are considered now.

Proposal

Current Emissions Schedule

The following charts represent the current weekly emissions schedule and total cumulative tokens. From the cumulative chart on the right, the emissions schedule appears to be an essentially smooth curve, but the weekly chart on the left makes it obvious that it is not.

Current SNX Inflationary Schedule

Decision framework for new emissions schedule

Assuming you want a new emissions schedule to closely resemble the current, but with a smooth curve vs a set of step changes, there are a few axiomatic decisions to make. Once these decisions have been made, modelling the optimal curve just becomes math and everything basically falls into place.

Those decisions are:

  • When to start the new schedule:
    • Before 52 weeks, or
    • Wait until end of first 52 weeks (like current schedule)
  • Ongoing tail emissions VS drop to zero tail emissions? If ongoing would you have:
    • Fixed annual % increase, if so what % of total pool
    • Fixed absolute, if so what fixed # of snx tokens, or
    • Fixed weekly decline (no decision needed you would simply copy the existing weekly decline)

Proposed Schedule

I built a model to help run various different scenarios. After running multiple scenarios the one I ultimately landed on is as follows:

  • Start new schedule at week 30: rationale - the earlier we move over the more gradual the curve is and the closer we can match original schedule
  • Ongoing inflation at 3% annual: rationale - both a flat and a declining emissions schedule tend towards zero over time. I think if you are going to have ongoing emissions, you should have something that gives the foundation a meaningful lever vs something that essentially disappears. The foundation can always remove it, but it is probably hard to add it later. I chose 3% somewhat arbitrarily, but looking at national economies as a watermark, something between 3-5% intuitively seems about right.

With these inputs, the model outputs the following scenario:

  • At week 30, total weekly emissions in absolute terms start reducing at 1.3% per week. I.e. if last week 100,000 snx were emitted, this week 98,700 snx would be emitted
  • At week 208, weekly emissions in relative terms start growing at 0.0577% per week (3% per year). I.e. if total snx pool is 250,000,000 at the beginning of the week, 144,250 snx will be emitted that week
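The two rules above can be sketched as a simple function. The week numbers and rates are taken from this proposal; the function shape itself is illustrative:

```python
def weekly_emission(week: int, last_week_emission: float,
                    total_supply: float) -> float:
    """Sketch of the proposed schedule.

    - Before week 30: the original schedule applies (not modelled here).
    - Weeks 30-207: absolute emissions decline 1.3% per week.
    - Week 208 onward: tail emissions of 0.0577% of total supply per
      week (~3% per year).
    """
    if week < 30:
        return last_week_emission  # placeholder for the original schedule
    if week < 208:
        return last_week_emission * (1 - 0.013)
    return total_supply * 0.000577
```

E.g. 100,000 SNX emitted last week gives 98,700 SNX this week during the decline phase, and a 250,000,000 SNX supply in the tail phase gives 144,250 SNX for the week, matching the figures above.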

You can see what the emission schedule looks like in terms of total snx minted weekly, plus cumulative over time below.

Proposed SNX Inflation Schedule

Perhaps more interesting is that you can see what the new emissions schedule looks like when overlaid over the old.

Current vs Proposed SNX Inflation Schedule

The total weekly emissions closely mirror the original schedule until the tail emissions schedule kicks in. You can also see that the cumulative total almost exactly matches the current schedule until tail emissions kick in, at which point it continues to grow.

Next steps

I think that the following decisions need to be debated and made by the community:

  1. Should we change the emissions schedule, if so
  2. Which criteria should be considered in addition to the decision framework articulated above
  3. When to start the new schedule:
    • Before 52 weeks, or
    • Wait until end of first 52 weeks (like current schedule)
  4. Should we have ongoing tail emissions VS drop to zero tail emissions? If ongoing would you have:
    • Fixed annual % increase, if so what % of total pool
    • Fixed absolute, if so what fixed # of snx tokens, or
    • Fixed weekly decline (no decision needed you would simply copy the existing weekly decline)

Once there is general consensus on the above this proposal could then be tightened up and turned into something that the entire community can vote on.

Cheers
deltatiger

Time based staking rewards

The current rewards calculation is snapshot based, with a single snapshot taken each week at ~08:00 UTC on Wednesday. The percentage of the global debt represented by each wallet is recorded and rewards are allocated proportionately. This mechanism presents an attack vector on the system, as a free rider could potentially mint just before each snapshot, then burn immediately after, receiving rewards without risk. In practice this has not been a common strategy, though it has been observed. The main reason is that most wallets do not retain sufficient Synths to fully clear their debt before each snapshot, making this strategy less effective. In addition, there are incentives to keep sUSD or sETH in the depot and/or Uniswap, due to potential liquidity in the case of the depot or LP rewards for the sETH pool.

However, if there is one pattern we have observed this year it is that any attack vector in the system will eventually be exploited, so rather than accept this latent risk and assume the current friction will not be automated away we have spent time on R&D to close the attack preemptively. This issue will outline a SIP that will change the mechanism for calculating rewards to rely on not only debt at the time of the snapshot but the average debt over the course of the period for each wallet.

To understand how this mechanism works I will first provide an overview of the current debt register.

To calculate rewards on-chain we must compress the information about the debt distribution to a few operations. With thousands of wallets minting and burning each week, storing the state of each wallet and calculating rewards would not be viable on-chain. Instead we use the fact that a mint or burn event has an equal impact on every other wallet in the system. To illustrate this, we will consider a three player game.

Screen Shot 2019-12-02 at 9 25 37 pm

At T0 Alice and Bob have 50% of the debt.
At T1 Carol enters and doubles the global debt, reducing the debt of both Alice and Bob by 50%.

The key is that the starting debt is irrelevant for either Alice or Bob, they will both be equally impacted by this change or debt delta. It also holds that no matter how many players are in the game any increase or decrease in the global debt by any player will impact each other player equally.

Once we identify this property of the system, we can compress all mint and burn event debt deltas by tracking the cumulative debt delta (CDD), which is the product of each debt delta.

For any wallet we can determine their current debt by storing three values: the initial percentage of the global debt they represented, the CDD index number at which they entered the system, and the CDD index number now. Take the simple case of Alice:

At T0 Alice enters the system as the only player with 100% of the debt.
Between T1 and T4 Bob doubles the debt three times.

T1: 100 | 0
T2: 50 | 50
T3: 25 | 75
T4: 12.5 | 87.5

Alice was 100% of the debt, and the CDD at T4 is 0.5 * 0.5 * 0.5 = 0.125, so we divide the CDD at T4 by the CDD at the time she entered and multiply by her initial debt percentage: (0.125 / 1) * 100% = 12.5%, which is what the table above shows. Again, this holds for every case provided we have these three values.

This now allows us to only store the CDD and the opening debt percentage for each wallet and from this information alone we can determine the debt of every wallet at any given time, specifically at the close of the period for the purpose of calculating rewards.

This calculation also allows us to provide each wallet with an exact amount of debt that must be burned to exit the system at any time without tracking every event for every wallet individually.
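The three stored values and the Alice example translate directly into code. A minimal sketch (function and variable names are illustrative, not the contract's):

```python
def current_debt_pct(entry_pct: float, cdd_at_entry: float,
                     cdd_now: float) -> float:
    """A wallet's current share of the global debt.

    Every mint/burn multiplies the global CDD by its debt delta, so a
    wallet's share scales by the ratio of the CDD now to the CDD at the
    point it entered the system.
    """
    return entry_pct * cdd_now / cdd_at_entry


# Alice enters with 100% of the debt (CDD = 1); Bob then doubles the
# global debt three times, each doubling halving everyone else's share:
cdd_now = 1.0 * 0.5 * 0.5 * 0.5            # = 0.125
alice_share = current_debt_pct(1.0, 1.0, cdd_now)  # 0.125, i.e. 12.5%
```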

When it comes to calculating rewards we take the snapshot for all wallets at the close of the period as if they had requested to exit the system and we then calculate the percentage of the debt each wallet represents and its proportion of the rewards for that period. As mentioned above this is suboptimal as it means that a wallet that has minted since the start of the period will get the same rewards as one that mints in the block immediately before the snapshot.

Increasing the number of snapshots does not resolve this; even with a high number of snapshots, a system could be constructed to automate the minting and burning around each snapshot with the only cost being gas and some infrastructure.

To close this attack vector we need to track the debt of each wallet over time throughout the period. But as mentioned above we need to be able to compress this information into a few operations to make the calculation viable on-chain.

We need to record the time interval between each mint and burn in order to capture the impact of changes to the debt register over time. This is a change from the current system, where we discard the previous debt register information if a user mints or burns multiple times within a period, because currently we only care about the last mint/burn event.

The calculations can be found here: https://docs.google.com/spreadsheets/d/12FLqU_q4vn57qm9ZwDyBUjrWZOBDagX00YRTOV5n-DM/edit#gid=0

Screen Shot 2019-12-03 at 1 37 51 pm

The table above shows a simple case of three events, in the case of Alice we can determine her average debt position over time with the following formula:

Davg = (Initial Debt % * # of blocks elapsed * T0 Cumulative Debt Delta)
     + (Initial Debt % * # of blocks elapsed * T1 Cumulative Debt Delta)
     + (Initial Debt % * # of blocks elapsed * T2 Cumulative Debt Delta)

Which can be factored to:

Davg = Initial Debt % * SUM(# of blocks elapsed * CDD)

Which allows us to create a new value that we increment with each mint/burn event to capture the impact that mint/burn had on all other wallets and for how long, allowing us to determine their average debt percentage over the period.
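A sketch of the time-weighted calculation. Note that normalising by the total blocks in the period is an assumption here so the result reads as an average share; the on-chain version would accumulate the weighted sum incrementally at each mint/burn event:

```python
def average_debt_pct(entry_pct: float,
                     intervals: list[tuple[int, float]]) -> float:
    """Time-weighted average debt share over a fee period.

    intervals: (blocks_elapsed, cdd) pairs, one per interval between
    mint/burn events since the wallet entered the period.
    """
    total_blocks = sum(blocks for blocks, _ in intervals)
    # SUM(# of blocks elapsed * CDD), accumulated per event on-chain
    weighted = sum(blocks * cdd for blocks, cdd in intervals)
    return entry_pct * weighted / total_blocks
```

For example, a wallet that entered with 100% of the debt and was diluted to a 50% share halfway through the period averages 75% for reward purposes.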

We already store the debt register entry where the wallet last minted/burned and the debt percentage at that time. So now by combining this information with the time based CDD value from when they entered and the close of the period we can track all wallets.

The issue is that this calculation requires the entry debt percentage to remain static during the period, so if someone mints or burns, their starting debt percentage will change. To resolve this, we need a new variable that captures the accrued fee pool percentage for a wallet when it mints or burns; we can then track fee accruals from that point forward. This also handles the case where a wallet joins for the first time or exits the system completely: in both cases we simply write the accrued fees to this variable and then wait for their next mint/burn or the close of the fee period, whichever comes first.

So at the end of the fee period, when a wallet claims fees, we first read the accrued fee percentage variable; if it is zero, we calculate the fee percentage by taking the entry debt percentage and multiplying it by the time-based cumulative debt delta to work out the average percentage of the debt pool for that wallet during the fee period.

We now have a system that distributes rewards more fairly while still allowing a wallet to exit or enter the debt pool during the fee period, losing fees only for the time during which they have no debt.

Once this proposal has been discussed and validated by the community I will write a SIP and the engineering team will determine the specific implementation within the system.

Mitigating price feed manipulation

We have seen some recent abuse of price feeds on illiquid synths (in particular sMKR and iMKR). In this writeup, I want to propose a few ideas to mitigate these attacks and future (potentially more dangerous) versions. See this reddit thread for some more context on the current attacks.

1. Disabling sMKR and iMKR

This is a good start to stop the current ongoing attack. See my proposed SIP 34 for more details.

I also think we should have a discussion on liquidity requirements for other current synths and what we will require for future synths. But that is lower priority than fixing the immediate attack vector.

2. Upper limits on any synth with dangerously low liquidity

A more dangerous version of this attack would happen if an even lower liquidity underlying had its orderbooks completely cleared. Such oracle attacks have happened in the past in crypto and if successful could completely destroy the project. It is therefore of crucial importance that any such attacks be mitigated. For example, suppose that an attacker was able to clear the orderbooks of a particular asset and set the price to 100BTC while holding a large number of synths. They would then be able to quickly exit via the uniswap pool. It's unclear whether such an attack would currently be profitable -- but it is nevertheless extremely dangerous.

I therefore believe that at a minimum, any synth that has some illiquidity dangers (definitely sDEFI, possibly sMKR (if not removed), sCEX, sTRX, sXTZ) needs to have an upper limit (similar to the lower limit of the inverse synths). This will put a cap on profit of this attack, almost certainly making it infeasible. It also turns it into more of a moderate severity rather than a project killing black swan.

This surely is an inconvenience for traders who may find their positions hitting the upper limit and being closed. But given the risks involved, I think we need such a limit regardless.

3. Upper limit on minting

The SNX token itself also has a serious liquidity problem that I believe presents some danger. We do have some ideas to incentivize liquidity. However, in the meantime, it also seems somewhat vulnerable to an orderbook clearing attack. An attacker that can set the price of SNX arbitrarily can potentially mint an arbitrary amount of synths and then exit through the uniswap pool. Again, it's not clear whether such an attack would currently be profitable, but it is another existential black swan risk for the project.

Furthermore, even in the case of extremely high volatility without manipulation, it could still pose a risk. Without liquidation/direct redemption, there are incentive problems for minters if their c-ratio gets near or below 100%, as they may prefer to just "walk" away and default on their debt rather than pay back more debt than their SNX is worth. So volatility that results in 90%+ drawdowns in the span of 1-2 weeks (a single fee period) could cause severe issues by itself.

In order to mitigate this, I suggest that we implement a new oracle system for the purposes of determining the c-ratio. For example, one choice could be min(current SNX price, 2 * (7-day lagged SNX price)). This would make oracle attacks/orderbook clearing attacks impractical while also reducing the likelihood of severe volatility resulting in incentive issues.
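The suggested price cap is essentially a one-liner; a sketch, where the factor of 2 and the 7-day lag are just the example parameters above:

```python
def cratio_snx_price(spot_price: float, lagged_7d_price: float) -> float:
    """SNX price used for c-ratio: capped at twice the 7-day lagged price.

    A sudden pump - whether manipulation or genuine volatility - cannot
    more than double the price used for minting within a week.
    """
    return min(spot_price, 2.0 * lagged_7d_price)
```

A 10x orderbook-clearing spike against a lagged price of $1 yields a minting price of only $2, while normal price moves pass through unchanged.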


All thoughts welcome. Suggestion 1 should in my opinion be a high priority, the other two may not be as high of a priority (but I still advise addressing those attacks as a successful attack could completely wipe out minters).

SIP: Add sOMG and iOMG synths

Simple Summary

Add sOMG and iOMG synths to Synthetix ecosystem

Motivation

Recently, OMG has seen very strong trading volumes, surpassing $100M+/day. However, it currently has very little liquidity in DeFi, precluding those without access to the centralized exchanges on which it trades. In addition, there is ample interest in shorting OMG, but this is not available via centralized exchanges, only through OTC venues.

OMG has much stronger orderbook depth than many synths that currently exist, and it satisfies the listing framework proposed by Delphi Digital.

Draft SIP: SNX Community Fund Allocation Through Inflation

TLDR;

As SNX begins to smooth current inflation rates, we believe that now is the best time to propose an ongoing-inflation treasury allocation to incentivize further growth and development of the Synthetix platform. We propose that this treasury be funded via a continuous allocation of weekly inflation, and that the funds accumulated be managed directly by the community through a multi-sig wallet.

We propose naming this treasury vehicle the SNX Community Fund, and allocating 5-15% of weekly inflation towards it.

We propose initiating a council, made up of team and community members, to govern and allocate the SNX Community Fund.

Background & Reasoning

The changes that were introduced in March 2019 spawned a fervent community of supporters of the Synthetix platform that have been driving growth for the last 7 months. As new challenges emerge, the team and community have proven their ability to alter the incentive structures, platform variables, and growth mechanics to overcome issues and continue the positive trajectory for Synthetix.

Over the last 7 months, we’ve also witnessed the massive success of discretionarily-allocated incentives for the Arb Contract, Uniswap sETH/ETH pool and Gitcoin Bounties. While some of these incentive systems have come from weekly inflation, others have been funded directly via the team/foundation. We believe there are substantial additional opportunities to build synth liquidity, acquire customers and drive network value if we can formalize the funding and allocation of similar growth initiatives.

Institutionalizing an ongoing method of funding critical protocol initiatives, and building the muscle of governing these funds in an open way, are key to the SNX’s long-term success. By allocating these funds early on in the network’s life cycle while inflation is high, we give SNX the operating leverage to pursue network-accretive initiatives far down the road. By opening up the funds to be managed by the community, we take some of our first steps as a community towards truly decentralizing the platform; further engaging our stakeholders, mitigating regulatory risk, and increasing the level of trust and transparency held throughout the network.

Proposed Model

We believe that allocating a percentage of SNX inflation rewards into a community-operated fund is the best way to grow long-term network value. We propose that the SNX Community Fund tokens be housed in a multi-sig contract and controlled by a group of elected council members. This allocation of SNX should be independent of any of the existing incentive mechanisms (sETH pool and Arb Contract).

Once the funds are in the multi-sig wallet, there needs to be a clearly defined process for what they can be used for, how decisions are made to fund proposals, and how to manage the ongoing treasury budget. We’ve detailed our proposed process for this in the next section.

Below are a few options for fund allocations based on the proposed new inflation schedule in Deltatiger’s SIP. This proposal allocates the specified percentage inflation into a multi-sig wallet starting week 40 and includes a tail inflation of 100k SNX per week after year 4 (TBU with inflation change proposals).

Screenshot 2019-10-23 12 14 32

Screenshot 2019-10-23 12 14 41

Screenshot 2019-10-23 12 14 52

Treasury Governance

Other blockchains use similar methods for governance and management of a treasury. Edgeware and Polkadot both implement a council of elected members to veto proposals, approve treasury expenditures and vote on proposals on behalf of network participants. While this model is likely too burdensome for our purposes, we can learn from their intentions and find something that fits our needs.

We like how the Uniswap pool is managed by a multi-sig wallet, and we think that something similar, but with more specifically defined responsibilities (a formal council), could be a good starting place for how to govern the treasury.

Tactically, we propose formalizing council-member responsibilities to be:

  • Attending monthly community discussions where new funding (SIP) proposals are introduced
  • Evaluating new (SIP) proposals and writing feedback on each (if any are proposed in a specified timeframe)
  • Engaging with feedback from the community
  • Deciding how much funding, if any, best fits the needs of each proposal
  • Collaborating with other council members to unify potential options
  • Making a decision on how to move forward for each proposal

For the initial set of council members, we believe that anywhere from 5-10 is a good number of participants. We think that having Synthetix team members hold the majority of seats at the start makes the most sense, eventually transitioning to more independent community members. We also think the best model is 6-month terms, with elections voted on by the community (in Discord).

Reasons For This Model

We believe this model for a treasury funded through an allocation of the weekly inflation would accomplish the goal of allowing the platform to iterate and experiment without facing roadblocks during ad-hoc requests, as well as showcase decentralized governance to build trust and transparency within the community.

Some of the future use cases that the treasury could be used to fund:

  • Referral program for sX users
  • Synth-specific liquidity pools (sUSD/USDC Uniswap pool)
  • Larger Gitcoin bug bounties
  • Community-driven feature development

Reasons Against This Model

The core reasons to be against this model are:

  • Lack of belief that growth has been incentivized through specific mechanisms (i.e. Uniswap sETH pool LP rewards)
  • Lack of belief that the Synthetix network will continue to face growth issues
  • Belief that the Synthetix Foundation should fund any future growth mechanisms
  • Belief that there should be a treasury, but that it should be controlled by the Synthetix team
  • Belief that an undefined governance model is preferable to a defined model
  • Belief that an on-chain governance model is preferable to a council-based model

Open Decisions

The open questions that need decisions are the following:

  • Should there be a treasury?
  • How should this treasury be funded?
  • How should the treasury be managed?
  • If there is a council, how do we elect people to it?
  • If there is a council, how many people should be on it?
  • If there is a council, how long should the terms be?
  • If there is a council, what quorum is needed to allocate to an initiative?

Arb pool: time based rewards and splitting into 2 contracts

Time based rewards for arb pool

Currently, the arb pool is not accomplishing its goals. There have already been two sets of improvements suggested by @kaiynne and @AndrewK respectively. However, in my opinion, neither solution really gets to the heart of the problem. I propose a time based rewards scaling method that achieves a more efficient use of inflation while also heavily discouraging manipulation. Implementing this as a starting point will also untie our hands when it comes to a number of other design choices that were previously discarded due to risk of manipulation.

Background

The way the current arb pool works is that when the conditions are met (sETH/ETH uniswap pool price < 0.99 and SNX currently in the contract), a user can send ETH to the contract. The contract then converts the ETH to sETH via uniswap (until the price is back to 0.99), stores the sETH, and sends back to the user (sETH received * SNX/ETH market rate) in SNX (as well as any unconverted ETH).

Currently, arb bots are heavily manipulating this system, see this comment by @nocturnalsheet. Ultimately, the issue is that the system is too generous and allows a large amount of slack which enables manipulation. Arb bots are competing based on speed and insider information (the fastest to send will often be the one who knows exactly when the arb pool condition will be triggered), rather than price.

Proposal

The way to solve both the inefficient use of inflation and the manipulation is to force arbers to compete on price. A manipulator clearly incurs non-zero costs in order to manipulate; as long as they are very likely to collect the arb pool rewards, these attacks may still be very profitable. However, given the chance, an honest arber with no such costs will be willing to outbid them, earning a profit while denying the attacks their profitability.

The easiest and most elegant way to do this is to implement rewards that increase over time, similar to an auction. I propose the following mechanism: a user sends ETH to the contract when the arb pool conditions are met and receives (ETH sent × SNX/ETH rate) + (multiplier × SNX/ETH rate). So the user receives SNX at some discount determined by the multiplier. Note that in the current system this multiplier is always set to (sETH stored in contract - ETH sent by user). Instead, we should make the multiplier an increasing function of both peg deviation and time.

A few possible multiplier function choices (see this google doc for a visualization):

  1. constant*(sETH stored in contract - ETH sent by user)*number of consecutive X minute periods in which pool is arbable

  2. constant*sum of deviations from peg in consecutive X minute periods in which pool has been arbable

  3. constant*(area under curve of deviations)

  4. constant*log(1+area under curve of deviations)

  5. some other function

The key idea is that the multiplier should start significantly lower than the current one and gradually scale up (possibly to a cap, either at the current reward or slightly above it). I favor a function that considers both the length of time and the severity of the deviations. This way, SNX should be sold at a much better rate than currently, and manipulation should be seriously diminished. Another benefit is that the system becomes more robust to the contract becoming arbable at a price far from the threshold (for example, if a user makes a large market sale with high slippage, or if new SNX is added to the contract while the sETH price is far from 0.99).
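As a rough sketch, candidate multiplier functions 1-4 above might look like this. This is purely illustrative, not contract code; `k` is a tuning constant and all names are hypothetical.

```python
import math

# Illustrative sketches of multiplier options 1-4 (hypothetical, not contract
# code). `deviations` holds the peg deviation observed in each consecutive
# X-minute period in which the pool was arbable; `k` is a tuning constant.

def option1(k, seth_stored, eth_sent, periods_arbable):
    # option 1: current-style reward scaled by how long the pool has been arbable
    return k * (seth_stored - eth_sent) * periods_arbable

def option2(k, deviations):
    # option 2: sum of per-period deviations from the peg
    return k * sum(deviations)

def option3(k, deviations, period_minutes):
    # option 3: area under the deviation curve (deviation integrated over time)
    return k * sum(d * period_minutes for d in deviations)

def option4(k, deviations, period_minutes):
    # option 4: log damping, so the multiplier grows quickly at first and then flattens
    return k * math.log1p(sum(d * period_minutes for d in deviations))
```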

Open questions:

  1. Should we introduce such a time-based system?

  2. If so, what multiplier function should be used?

  3. Should we combine this proposal with any other proposed arb pool improvements (such as those proposed by Kain or AndrewK, or the multiple arb pool concept described below)?

Multiple arb contracts

This is not a new proposal; @gmgh/DegenSpartan has suggested this concept many times on Discord. Here I want to lay out my vision for how such a system could work. I think the arb pool has two major goals: to help traders sell quickly at a price near 1:1, and to reassure traders, Uniswap LPs, and other synth holders that their synths will not become worthless overnight.

Proposal

I would like to see two separate arb contracts, each focused on one of these goals. The maintenance contract would serve goal 1, smoothing demand for synths and keeping the price largely in the 0.99-1.01 range. This contract's arb threshold would probably be set at 0.99, similar to the current contract. The vast majority of inflation dedicated to this contract should be expected to be spent on a weekly basis, so there should be a focus on capital efficiency; methods like locking SNX rewards or sending users mostly sETH instead of mostly SNX can be considered.

A second contract, which I would call an assurance contract, serves the second goal of helping synth holders and arbers feel secure that the peg will hold. A recurring problem in the history of stablecoin peg maintenance is that arbers become less and less comfortable arbing as the price moves further from the peg. It may be a very good trade to buy a stablecoin at 0.97 intending to sell it at 1.00, and a very bad trade to buy one at 0.87 with the same intention. Often, getting off peg leads to a bank-run situation where many people start selling in expectation of others selling. In our current model this would be especially dangerous, as a large portion of synths is actually being used as Uniswap liquidity. A panic could lead to liquidity being removed and synths being sold back into the pool, amplifying the market impact of the selling.

To avoid this, it is desirable to have an arb pool that builds up a large amount of SNX and becomes available only in rare cases. I propose a pool that becomes arbable only if the sETH/ETH price drops below 0.95, chosen because the price has rarely, if ever, dropped that far in recent months. This pool should be optimized for maximum liquidity, so it should pay entirely in (unlocked) SNX; if it is ever needed, we want arbers to be absolutely confident of getting paid.
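The two-contract scheme can be sketched as a pair of thresholds (a hypothetical illustration of the proposal, not contract code; names and the 0.99/0.95 levels come from the text above):

```python
# Hypothetical sketch of the two-pool thresholds proposed above: the
# maintenance pool handles small deviations, while the assurance pool only
# opens during rare, severe depegs.
MAINTENANCE_THRESHOLD = 0.99
ASSURANCE_THRESHOLD = 0.95

def arbable_pools(seth_eth_price):
    """Return which pools are open to arbers at a given sETH/ETH price."""
    pools = []
    if seth_eth_price < MAINTENANCE_THRESHOLD:
        pools.append("maintenance")
    if seth_eth_price < ASSURANCE_THRESHOLD:
        pools.append("assurance")
    return pools

# arbable_pools(0.985) -> ["maintenance"]
# arbable_pools(0.94)  -> ["maintenance", "assurance"]
```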

The pool would likely build up a sizable amount of SNX at this level, which would make a bank-run situation less likely. It also improves the game theory for arbers who don't go through the contract. If an arber wants to buy sETH at 0.96 and sell when the price gets back to 0.99, their primary concern is the likelihood that the price continues to drop. If they know there is a sizable supply waiting to provide support at 0.95, they will likely see a good risk/reward in buying sETH before it reaches that level. This mechanism makes it even less likely that the pool ever needs to be tapped.

The two major concerns with this idea in the past have been the prospect of manipulation and the risk of needing additional inflation. I believe that combining this with the time-based rewards described above removes most of the manipulation risk. It is true that this may require additional inflation; however, inflation that sits in the arb pool indefinitely does not dilute holders until it is actually released. And if the peg ever breaks that hard, it is likely much better for SNX holders to be diluted than to let a bank-run scenario play out.

Open questions:

  1. Should we implement 2 or more contracts to achieve these dual goals?

  2. What should thresholds and inflation allocations be?

  3. What exact design tools should we use to achieve maintenance and assurance?

  4. Is there a serious risk of manipulation caused by creating an assurance contract?

SNX Uniswap Pool Staking Incentives

Simple Summary
This SIP proposes to introduce SNX Uniswap pool staking incentives in order to improve SNX liquidity. Inspired by a previous, highly successful incentive initiative, the aim is to create a deeper SNX/ETH Uniswap pool by diverting a portion of SNX inflation to SNX liquidity providers. This will reduce slippage on larger trades and potentially bring much higher volume. More importantly, this incentive will decentralize SNX liquidity: liquidity will be supplied by a broader range of uncorrelated parties based all over the world, making it fundamentally more robust, less likely to vanish in a crisis, and more indicative of a healthy market.

Abstract
This SIP formalises, at the protocol level, the diversion of a portion of SNX inflation into a pool that incentivises liquidity providers of the SNX/ETH pair on Uniswap.

Motivation
Inflationary monetary policy, introduced in March 2018, was designed to create a strong incentive for SNX holders to stake their tokens by minting synths. For providing debt liquidity, SNX holders are rewarded with additional SNX on top of exchange fee rewards. This has led to enormous participation: currently ~81% of SNX tokens are staked. At the same time, it greatly reduced the supply available for providing SNX liquidity. Moreover, because of high SNX staking rewards (currently around 1 SNX per 100 SNX staked, per week), market making became largely unattractive: at current SNX trade volume on Uniswap, the 0.3% fee per trade can't compete with the ~1% per week that holders earn by staking SNX. That is why incentivizing SNX liquidity providers is necessary.

Now, let's analyze how large the weekly incentive should be in order to create a pool 1,000,000 SNX + 6,400 ETH deep. We can arrive at this number by calculating the weekly SNX rewards from the alternative use of those funds (SNX minting reward + sETH LP reward).

Assumptions:

  • weekly minting reward (90% of inflation): ~1 SNX per 100 SNX staked
  • weekly sETH LP incentive (5% of inflation): 72,000 SNX
  • SNX price: 0.0064 ETH
  • sETH pool depth: ~90,000 ETH (44,700 ETH + 45,500 sETH)

Calculation:

  Minting reward (locked for 12 months): ~1,000,000 SNX / 100 = 10,000 SNX
  sETH LP reward (unlocked): ~6,400 / (6,400 + 90,000) × 72,000 SNX ≈ 4,780 SNX
  Total: ~15,000 SNX, or ~1.1% of weekly inflation
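The arithmetic above can be checked with a quick back-of-envelope script (illustrative only; all inputs come straight from the assumptions listed):

```python
# Back-of-envelope check of the incentive sizing above (illustrative only).
pool_snx, pool_eth = 1_000_000, 6_400     # target SNX/ETH pool depth
seth_lp_weekly = 72_000                   # sETH LP incentive, 5% of inflation
weekly_inflation = seth_lp_weekly / 0.05  # -> 1,440,000 SNX per week
seth_pool_depth_eth = 90_000              # ~44,700 ETH + 45,500 sETH

minting_reward = pool_snx / 100                       # ~10,000 SNX, locked
lp_reward = (pool_eth / (pool_eth + seth_pool_depth_eth)
             * seth_lp_weekly)                        # ~4,780 SNX, unlocked
total = minting_reward + lp_reward                    # ~15,000 SNX
share = total / weekly_inflation                      # ~1% of weekly inflation
```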

*Because of decaying inflation, allocating a fixed % of the inflationary supply to SNX LPs ensures that the proportion between incentive initiatives stays constant.

Specification
See https://sips.synthetix.io/sips/sip-31

Rationale
By implementing SNX Uniswap staking incentives we can largely improve SNX liquidity, both quantitatively (deeper pool and increased volume) and qualitatively (decentralized, more robust and healthier liquidity).
Test Cases

Implementation

SCCP 18: Decrease Collateralisation Ratio to 700%

Simple Summary
Decrease the target Collateralisation Ratio for SNX stakers to 700%.

Abstract
There is currently a deficit of Synths (relative to demand) in the Synthetix ecosystem. Decreasing the target Collateralisation Ratio will cause some SNX stakers to increase the amount of Synths they mint.
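A minimal sketch of why a lower target c-ratio increases Synth supply: the sUSD a staker can mint is their staked SNX value divided by the target ratio. Note that the 750% starting point below is an assumption for illustration; this SCCP only states the new 700% target.

```python
# Hypothetical illustration (not protocol code): mintable sUSD is the staked
# SNX value divided by the target collateralisation ratio, so lowering the
# target raises the mintable amount. The 750% prior target is an assumption.
def max_mintable_susd(snx_staked, snx_price_usd, target_c_ratio):
    return snx_staked * snx_price_usd / target_c_ratio

before = max_mintable_susd(10_000, 1.00, 7.50)  # at 750%: ~1,333 sUSD
after  = max_mintable_susd(10_000, 1.00, 7.00)  # at 700%: ~1,429 sUSD
```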

Motivation
The Synth pegs (both sETH on Uniswap and sUSD on Curve) are currently off-peg, which means demand for Synths is high. There are other levers in the works that should contribute to tightening these pegs; the Collateralisation Ratio is one of them.

Unfork from ethereum/EIPs

About once a week, someone accidentally submits a PR against ethereum/EIPs, likely because GitHub targets the upstream repository by default. This creates noise in the EIPs repository and is probably frustrating for the users who do it by accident. It would be nice if this repository were detached so that the default behavior is the desired one: PRs opened against this repository rather than ethereum/EIPs.

This Stack Exchange comment suggests that you can get the GitHub virtual assistant bot to do it for you, assuming you are the repository owner: https://stackoverflow.com/questions/29326767/unfork-a-github-fork-without-deleting/41486339#comment116493608_29326840
