
contracts-v2's Introduction

Notional Finance Monorepo

This is the monorepo for all the Notional finance code. It is split into the following subpackages:

  • @notional-finance/contracts: smart contract code and tests using Buidler.
  • @notional-finance/sdk: A TypeScript-based SDK that contains user-centered logic for interacting with Notional contracts, including calculating rates offline. We will continue to expand and update this SDK as the underlying contracts evolve so that UIs have an abstraction layer to work with.
  • @notional-finance/subgraph: A Graph Protocol subgraph for caching contract interactions
  • @notional-finance/web: Web frontend using React

Getting Started

  • You can learn more about the design of Notional from the whitepaper.

Contract Details

Developers

Generate a React Application

Run nx g @nrwl/react:app my-app to generate an application.

When using Nx, you can create multiple applications and libraries in the same workspace.

Generate libraries

Grouped Library

Run nx g @nrwl/react:lib {features|common|shared}/lib-name --standaloneConfig --component false --buildable --import-path @notional-finance/lib-name to generate a grouped library.

NPM Publishable Library

Run nx g @nrwl/js:lib lib-name --standaloneConfig --component false --publishable --buildable --import-path @notional-finance/lib-name

Libraries are shareable across libraries and applications. They can be imported from @notional-finance/mylib.

Development server

Run nx serve web for a dev server. Navigate to http://localhost:3000/. The app will automatically reload if you change any of the source files.

Code scaffolding

Run nx g @nrwl/react:component my-component --project=my-app to generate a new component.

Build

Run nx build my-app to build the project. The build artifacts will be stored in the dist/ directory. Use the --prod flag for a production build.

Running unit tests

Run nx test my-app to execute the unit tests via Jest.

Run nx affected:test to execute the unit tests affected by a change.

Understand your workspace

Run nx graph to see a diagram of the dependencies of your projects.

Further help

Visit the Nx Documentation to learn more.

☁ Nx Cloud

Distributed Computation Caching & Distributed Task Execution

Nx Cloud pairs with Nx in order to enable you to build and test code more rapidly, by up to 10 times. Even teams that are new to Nx can connect to Nx Cloud and start saving time instantly.

Teams using Nx gain the advantage of building full-stack applications with their preferred framework alongside Nx’s advanced code generation and project dependency graph, plus a unified experience for both frontend and backend developers.

Visit Nx Cloud to learn more.

contracts-v2's People

Contributors

elnilz, jeffywu, t-woodward, weitianjie2000


contracts-v2's Issues

Incorrect calculation when getting nTokensToLiquidate

The following code does not match the commented math:

// benefitGained = perpTokensToLiquidate * (liquidatedPV - freeCollateralPV)
// benefitGained = perpTokensToLiquidate * perpTokenPV * (liquidationHaircut - pvHaircut)
// perpTokensToLiquidate = benefitGained / (perpTokenPV * (liquidationHaircut - pvHaircut))
nTokensToLiquidate = benefitRequired.mul(Constants.INTERNAL_TOKEN_PRECISION).div(
    factors.nTokenValue.mul(haircutDiff).div(Constants.PERCENTAGE_DECIMALS)
);

The "perpTokenPv" in the math describes the Pv of a single perpToken without any valuation haircut applied. But "nTokenValue" is the aggregate PV of the account's total nTokenBalance with the valuation haircut applied, not one single nToken without any haircut applied.

So to get the code to match the math, you would first have to divide nTokenValue by nTokenBalance and by "nToken.parameters[Constants.PV_HAIRCUT_PERCENTAGE]" before solving for nTokensToLiquidate, similar to what you do here:

nTokensToLiquidate
    .mul(int256(uint8(factors.nTokenParameters[Constants.LIQUIDATION_HAIRCUT_PERCENTAGE])))
    .mul(factors.nTokenValue)
    .div(int256(uint8(factors.nTokenParameters[Constants.PV_HAIRCUT_PERCENTAGE])))
    .div(balanceState.storedNTokenBalance);
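
A minimal off-chain sketch of the corrected math, assuming percentages on a 0-100 scale and plain numbers rather than the contract's fixed-point types (names are illustrative):

    // Hypothetical sketch of the corrected calculation.
    function nTokensToLiquidate(
        benefitRequired: number,
        nTokenValue: number,        // aggregate PV of the account's nTokens, haircut applied
        nTokenBalance: number,
        pvHaircut: number,          // e.g. 90
        liquidationHaircut: number  // e.g. 95
    ): number {
        // Recover the PV of a single nToken with no haircut applied.
        const perTokenPVNoHaircut = nTokenValue / nTokenBalance / (pvHaircut / 100);
        // benefitGained = tokens * perTokenPVNoHaircut * (liquidationHaircut - pvHaircut)
        return benefitRequired / (perTokenPVNoHaircut * ((liquidationHaircut - pvHaircut) / 100));
    }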

Update getAnnualizedSupplyRate to recommended formula

It seems that the “getAnnualizedSupplyRate” function in “cTokenAggregator.sol” uses a formula that is different from the one recommended in the Compound docs.

Used formula: Supply rate per block * blocks per year * notional rate precision / supply rate precision
Recommended formula: APY = (((Rate / ETH Mantissa * Blocks Per Day + 1) ^ Days Per Year) - 1) * 100
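
A minimal sketch of the recommended calculation, following the formula in the Compound documentation (the blocksPerDay value is an assumption based on ~13.15-second blocks):

    // Compound's recommended supply APY calculation, in percent.
    const ethMantissa = 1e18;
    const blocksPerDay = 6570; // assumption: ~13.15 second block time
    const daysPerYear = 365;

    function supplyApyPercent(supplyRatePerBlock: number): number {
        return (Math.pow((supplyRatePerBlock / ethMantissa) * blocksPerDay + 1, daysPerYear) - 1) * 100;
    }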

Update totalBitsSet to be more efficient

The way you count “one” bits in a word is very inefficient. Here is how I would do it:

function setBitsCount (uint x) public pure returns (uint) {
    // Count bits in each 2-bit group.
    x = (x & 0x5555555555555555555555555555555555555555555555555555555555555555) + (x >> 1 & 0x5555555555555555555555555555555555555555555555555555555555555555);
    // Sum adjacent 2-bit counts into 4-bit groups.
    x = (x & 0x3333333333333333333333333333333333333333333333333333333333333333) + (x >> 2 & 0x3333333333333333333333333333333333333333333333333333333333333333);
    // Sum the two nibble counts of each byte into its low nibble (counts fit in 3 bits, so 0x07 masks suffice).
    x = (x & 0x0707070707070707070707070707070707070707070707070707070707070707) + (x >> 4);
    // Sum adjacent byte counts into 16-bit groups.
    x = (x & 0x000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F) + (x >> 8 & 0x000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F);
    // Fold the 16-bit counts together; sums stay small enough not to carry between groups.
    x = x + (x >> 16);
    x = x + (x >> 32);
    x = x + (x >> 64);
    // Low byte holds the count of the low 128 bits, bits 128+ hold the count of the high 128 bits.
    return (x & 0xFF) + (x >> 128 & 0xFF);
}

Incorrect calculation in limitPurchaseByAvailableAmounts

The check on collateralAvailable is not properly applied:

if (fCashLiquidationPV > c.factors.collateralAvailable.add(fCashBenefit)) {
    fCashToLiquidate = c.factors.collateralAvailable.mul(Constants.RATE_PRECISION).div(
        liquidationDiscountFactor
    );

The net change to collateralAvailable is -fCashCollateralPV + fCashBenefit, not -fCashLiquidationPV + fCashBenefit, where fCashCollateralPV equals the fCash PV after applying the riskAdjustedDiscountFactor, not the liquidationDiscountFactor.

collateralAvailableNew = collateralAvailable - fCashCollateralPV + fCashBenefit
collateralAvailableNew = collateralAvailable - (fCashCollateralPV - fCashBenefit)

So you want to check that fCashCollateralPV - fCashBenefit is less than collateralAvailable. Equivalently, you can check whether fCashCollateralPV > collateralAvailable + fCashBenefit and then modify fCashToLiquidate if it is, as the code does. Solving for the new value of fCashToLiquidate:

fCashCollateralPV - fCashBenefit = collateralAvailable
fCashCollateralPV = fCashToLiquidate * riskAdjustedDiscountFactor
fCashBenefit = fCashToLiquidate * (liquidationDiscountFactor - riskAdjustedDiscountFactor)

fCashToLiquidate * riskAdjustedDiscountFactor - fCashToLiquidate * (liquidationDiscountFactor - riskAdjustedDiscountFactor) = collateralAvailable
fCashToLiquidate * (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor) = collateralAvailable
fCashToLiquidate = collateralAvailable / (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor)

And then we also need to change this:

c.factors.collateralAvailable = c.factors.collateralAvailable.sub(
    fCashToLiquidate.mul(riskAdjustedDiscountFactor).div(Constants.RATE_PRECISION)

collateralAvailableNew = collateralAvailable - (fCashCollateralPV - fCashBenefit)
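
A minimal off-chain sketch of the corrected cap, assuming the discount factors are plain fractions rather than RATE_PRECISION-scaled integers (names are illustrative):

    // Hypothetical sketch: cap fCashToLiquidate so collateralAvailable is exactly consumed.
    function limitFCashToLiquidate(
        collateralAvailable: number,
        riskAdjustedDiscountFactor: number, // e.g. 0.95
        liquidationDiscountFactor: number   // e.g. 0.97
    ): number {
        // From: fCashToLiquidate * (2 * riskAdjusted - liquidationDiscount) = collateralAvailable
        return collateralAvailable / (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor);
    }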

Aztec <> Notional bridge

We are looking to have a bridge contract written for Aztec <> Notional that would allow DeFi users to lend on Notional via Aztec's zk.money. This would introduce a layer of privacy and a lower-gas option for using Notional on ETH mainnet. The grantee will need to fill out an application for the grant here: https://medium.com/aztec-protocol/private-defi-with-the-aztec-connect-bridge-76c3da76d982

The grant recipient will be eligible for a 3k grant (in NOTE) from Notional and a 2k grant from Aztec on completion.

More info on the resources available: https://aztecnetwork.notion.site/aztecnetwork/Aztec-Grants-19896948e74b4024a458cdeddbe9574f

Here's more on the bridge contracts: https://medium.com/aztec-protocol/private-defi-with-the-aztec-connect-bridge-76c3da76d982

And finally a repo https://github.com/AztecProtocol/aztec-connect-starter

This grant will be posted on Gitcoin.

Update comments

  • Add additional commentary on calculateLiquidationAmounts function
  • Fix the comment about rateScalars / liquidity token haircut sizes in cash groups
  • Update comments in remapBitmap code

Add a check for deleted assets to _hasLiquidityTokens in LiquidateCurrency

During liquidation, assets are first settled and finalized before reading from storage again during getLiquidationFactors. As a result, there should not be any deleted assets when we go to look for liquidity tokens in a portfolio. However, it still doesn't hurt to add a check for deleted assets in this method.

Confusing terminology in calculateMaxLiquidationAmount

  • This function calculates the liquidationAmount, not the maxLiquidationAmount.
  • MAX_LIQUIDATION_PORTION does not represent a maximum amount. It represents instead a default amount.
  • maxAllowedAmount is not the maximum amount that a user is allowed to liquidate. If initialAmountToLiquidate is greater than maxAllowedAmount, the result can be greater than maxAllowedAmount.

Incorrect Liquidity Fee Calculation

We still calculate the liquidity fee assuming the old (non-compounding) rate framework. If we want to charge XX bps on the annualized continuously compounded interest rate, we have to do it slightly differently.

We first call getExchangeRate here:

(tradeExchangeRate, success) = getExchangeRate(

Then we separately calculate the liquidity fee and add/subtract that to/from the exchange rate here:

if (fCashAmount > 0) {
    uint fee = cashGroup.getLiquidityFee(timeToMaturity);
    tradeExchangeRate = tradeExchangeRate.add(int(fee));
} else {
    uint fee = cashGroup.getLiquidityFee(timeToMaturity);
    tradeExchangeRate = tradeExchangeRate.sub(int(fee));

Ultimately, what we want to solve for is this:

tradeExchangeRate = exp((tradeInterestRateNoFee + Fee) * t)

What we have access to after we call getExchangeRate is tradeExchangeRateNoFee, defined like this:

tradeExchangeRateNoFee = exp((tradeInterestRateNoFee) * t)

Some math:

tradeExchangeRate = exp((tradeInterestRateNoFee + Fee) * t)
= exp(tradeInterestRateNoFee * t + Fee * t)
= exp(tradeInterestRateNoFee * t) * exp(Fee * t)
= tradeExchangeRateNoFee * exp(Fee * t)

So instead of adding/subtracting the liquidity fee, we need to calculate exp(Fee * t) and multiply it by tradeExchangeRateNoFee.
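
A minimal numeric sketch of the multiplicative fee, assuming the fee is an annualized fraction and mirroring the add/sub branches of the current code (illustrative, not the contract's fixed-point implementation):

    // Hypothetical sketch: apply the liquidity fee multiplicatively.
    const SECONDS_IN_YEAR = 31536000;

    function applyLiquidityFee(
        tradeExchangeRateNoFee: number, // exp(tradeInterestRateNoFee * t)
        annualFee: number,              // e.g. 0.003 for 30 bps
        timeToMaturity: number,         // in seconds
        fCashAmount: number
    ): number {
        const t = timeToMaturity / SECONDS_IN_YEAR;
        const feeMultiplier = Math.exp(annualFee * t);
        // Mirror the current add/sub branches: raise the rate for one trade
        // direction, lower it for the other.
        return fCashAmount > 0
            ? tradeExchangeRateNoFee * feeMultiplier
            : tradeExchangeRateNoFee / feeMultiplier;
    }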

Incorrectly summing underlying values with asset values in freeCollateral calculation

The summing is done in three places:

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioBalance);

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioBalance);

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioValue);

The portfolioBalance is denominated in underlying terms, but it is aggregated with the netCashBalance and nTokenValue figures, which are denominated in asset terms.

Incorrect calculation in liquidityToken withdrawal during liquidation

These calculations are slightly incorrect:

w.incentivePaid = assetAmountRemaining
    .mul(Constants.TOKEN_REPO_INCENTIVE_PERCENT)
    .div(Constants.PERCENTAGE_DECIMALS);
// Otherwise remove a proportional amount of liquidity tokens to cover the amount remaining.
int256 tokensToRemove =
    asset.notional.mul(assetAmountRemaining).div(w.netCashIncrease);

This calculation treats assetAmountRemaining as if it were netCashIncrease, as opposed to netCashToAccount, per the nomenclature below:

// We can only recollateralize the local currency using the part of the liquidity token that
// between the pre-haircut cash claim and the post-haircut cash claim. Part of the cash raised
// is paid out as an incentive so that must be accounted for.
// netCashIncrease = cashClaim * (1 - haircut)
// netCashIncrease = netCashToAccount + incentivePaid
// incentivePaid = netCashIncrease * incentive

If we do it this way, assetAmountRemaining wouldn't actually be exactly zeroed out, so it would be erroneously set to zero here:

Instead we can convert assetAmountRemaining to netCashIncrease before calculating incentivePaid:

netCashIncrease = netCashToAccount + incentivePaid
netCashIncrease = netCashToAccount + netCashIncrease * incentive
netCashIncrease * (1 - incentive) = netCashToAccount
netCashIncrease = netCashToAccount / (1 - incentive)

incentivePaid = netCashToAccount * (incentive / (1 - incentive))
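
A minimal numeric sketch of this conversion, assuming incentive is a plain fraction (e.g. 0.1 for a 10% repo incentive):

    // Hypothetical sketch: treat assetAmountRemaining as netCashToAccount and derive the rest.
    function incentiveAmounts(assetAmountRemaining: number, incentive: number) {
        // netCashIncrease = netCashToAccount / (1 - incentive)
        const netCashIncrease = assetAmountRemaining / (1 - incentive);
        // incentivePaid = netCashIncrease * incentive = netCashToAccount * incentive / (1 - incentive)
        const incentivePaid = netCashIncrease - assetAmountRemaining;
        return { netCashIncrease, incentivePaid };
    }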

Switch to continuous compounding

Overview:
Using continuous compounding keeps interest rates consistent across periodSizes and ensures that we don't run into any issues if we extend the maturity horizon significantly.

Changes:

  1. Change the implied rate calculation here: https://github.com/swapnet-protocol/swapnet-lite/blob/6b782f91be1d2b453fbdbdd9ebda282a6c9683d2/packages/contracts/contracts/CashMarket.sol#L1157-L1159

     periodNum = timeToMaturity / G_MATURITY_LENGTH
     rate = math.log(exchangeRate) / periodNum

  2. Change the initial anchor calculation here: https://github.com/swapnet-protocol/swapnet-lite/blob/6b782f91be1d2b453fbdbdd9ebda282a6c9683d2/packages/contracts/contracts/CashMarket.sol#L326-L334

     periodNum = timeToMaturity / secondsInYear
     anchor = math.exp(periodNum * (G_RATE_ANCHOR - 1))
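
A quick numeric illustration of why continuous compounding is consistent across period sizes (a hypothetical check, not contract code):

    // With continuous compounding, the same annualized rate is recovered no matter
    // how the horizon is divided: ln(exp(r * t)) / t == r for any period size t.
    const r = 0.05;
    const rateFromOneYear = Math.log(Math.exp(r * 1)) / 1;
    const rateFromHalfYear = Math.log(Math.exp(r * 0.5)) / 0.5;
    console.log(Math.abs(rateFromOneYear - rateFromHalfYear) < 1e-12); // true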

Variable renaming to clarify what is in asset terms and what is in underlying terms

Variables to rename:

int256 totalCurrentCash;

int256 totalCurrentCash;

Questions:
Is this asset cash or underlying?

int256 preFeeCashToAccount =

Is this asset cash or underlying?

netCashToAccount.mul(rateAnchor).div(Constants.RATE_PRECISION).neg();

Is this asset cash or underlying?

derivative = cashAmount

Are these asset cash or underlying?

(int256 netCashToAccount, int256 netCashToMarket, int256 netCashToReserve) =

Do these functions return asset cash or underlying values?

function _getNTokenHaircutValue(

function _getPortfolioAndNTokenValue(

function _getBitmapBalanceValue(

function _getBitmapPortfolioValue(

Governance Proposal Vulnerabilities

From ABDK:

Looking at the “cancelProposal” function inside “GovernorAlpha.sol”, it seems that a proposer has the ability to cancel his proposals at any time, regardless of how many people voted for them. This makes the voting potentially unfair, as the proposer has a “veto” right over the proposal, while other voters don't have such a right.
I mean this code:

require(
    msg.sender == guardian ||
        note.getPriorVotes(proposal.proposer, blockNumber - 1) < proposalThreshold,
    "GovernorAlpha::cancel: proposer above threshold"
);

The idea is that if the voting power of the proposer drops below the threshold before the proposal is executed, anybody may cancel the proposal. But the proposer may intentionally reduce his voting power to be able to cancel his own proposal, even if this proposal has already succeeded. This looks like an abuse that should not be permitted.

Different attacks are possible, I believe. Here is one example:

  1. A good proposal is discussed on social media. The proposal will most probably succeed; someone just needs to propose it.
  2. A malicious user, who wants to sabotage the proposal, proposes it from his own address and starts advertising it.
  3. People vote for the proposal; the proposal succeeds and is queued.
  4. Just before the execution window opens, the proposer cancels the proposal.
  5. As a result, the community has lost several days and needs to start over.
  6. Now other users make the same proposal; probably several people will do this at once.
  7. The malicious user also makes several copies of his proposal from different addresses.
  8. Now users need to somehow distinguish proposals made by honest users from those made by the malicious user, and only vote for the proposals from trusted proposers. This breaks the original idea (as I understand it) that users don't need to trust proposers, but only trust their proposals.
  9. Also, in case there are several similar proposals made by several honest proposers, users will need to somehow coordinate off-chain and choose which one to vote for. This breaks the idea that voting is done on-chain and doesn't require any off-chain coordination.
  10. All this could disrupt the voting, and no proposal could achieve quorum.
  11. In such a case the community will need to start over, etc.

It seems that the voting schema implemented in “GovernorAlpha.sol” is vulnerable to certain kinds of attacks.

One attack is when a bad proposal is proposed by a malicious user. As the proposal is bad, almost nobody votes for it and it doesn't achieve quorum. As long as the proposal is not going to succeed, most of the users who are actually against it will not waste gas voting against it. However, the malicious proposer may vote for the proposal with a large number of votes a few seconds before the end of the voting period, so other users just don't have enough time to vote against it.

Another attack: there could be proposals that are good when executed once but bad when executed more than once. Once such a proposal is proposed, a malicious user may propose a copy of it and start advertising it. As the proposals are basically the same, and are good when considered in isolation, some users will vote for the original proposal while others will vote for the second one. Probably some users will vote for both. It is very possible that both proposals will succeed, resulting in a kind of double spending or something like it.

Rounding issues when converting from internal to external

From ABDK:

BTW, I see across the code that you deal with token decimals as if token amounts were fractional, and you convert amounts from external precision to internal (8 decimals) and back. However, to me this is the wrong approach. Token amounts are actually integers and should be treated as such. The (optional) "decimals" property was initially introduced as a hint for UIs to help render token amounts in a more human-friendly way, but smart contracts should usually just ignore this property.
Treating token amounts as fractional numbers just makes the code more complicated and introduces rounding errors.

This is true only if you assume that a whole token has a reasonable price (not too high, not too low), and this assumption could be false. There could be a token with 18 decimals where even a whole token (10^18 base units) costs almost nothing and real amounts, even measured in whole tokens, are very large. And there could be a token with, say, 18 decimals, where 1 base unit costs something and a whole token is an extremely large value.
So what you are basically talking about is a scale factor that you could introduce to make a whole fCash token have a reasonable price even if the underlying token's price is unreasonable.
Now you try to figure out this scale factor from decimals, but this is not reliable.
This scale factor is similar to the "lot" concept on traditional exchanges.

Several DeFi protocols have had problems listing certain tokens because of similar assumptions. They assumed either that a whole token has a reasonable price or that a base unit has an extremely low price. It is better not to rely on such things.

Also, I see that when converting between internal and external precision you always round down. This could lead to a situation where the contract doesn't have enough assets to transfer due to accumulated rounding errors. A common best practice is to always round towards the protocol, i.e. against the user. This would require two versions of the conversion functions, one that rounds up and another that rounds down.
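
A minimal sketch of such direction-aware conversion, assuming internal amounts use 8 decimals and amounts are non-negative (the helper name is illustrative):

    // Hypothetical sketch: convert an internal (8-decimal) amount to external precision,
    // choosing the rounding direction so the protocol never loses dust.
    function toExternal(internalAmount: bigint, tokenDecimals: number, roundUp: boolean): bigint {
        const internalDecimals = 8n;
        const decimals = BigInt(tokenDecimals);
        if (decimals >= internalDecimals) {
            return internalAmount * 10n ** (decimals - internalDecimals); // exact, no rounding
        }
        const divisor = 10n ** (internalDecimals - decimals);
        const quotient = internalAmount / divisor; // truncates (rounds down for non-negative values)
        return roundUp && internalAmount % divisor !== 0n ? quotient + 1n : quotient;
    }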

Updates to Portfolio Handler gas efficiency

  • When dealing with arrays in memory, store the current element into a local variable rather than accessing via the array index every time.
  • Store the initial slot in the portfolio state object so that we don't need to recalculate later when we store it back.
  • Use the initial slot to determine the maxActiveSlot via something like maxActiveSlot = initialSlot + storedAssetLength

Make bitNum in bitmap portfolios more efficient

Now you extract bit indexes from the bitmap currency, and then convert these bit indexes to/from maturities. Extracting bit indexes from a word is quite expensive. It would be cheaper to extract bits rather than bit indexes (bit = 2^bitIndex). This would make iteration through set bits significantly cheaper, maturity -> bit conversion about the same cost, and bit -> maturity conversion a bit more expensive.

Jeff Wu: not sure i totally follow

Mikhail Vladimirov: OK. You have a word used as a bitmap. You want to iterate through all the “one” bits in it. Currently, you just check bits one by one. This is suboptimal. Taking into account that there couldn't be too many “one” bits, you may optimize by checking several bits at once via an “and” mask, and in case all the checked bits are “zero”, skip them all at once.

Jeff Wu: right i see

Mikhail Vladimirov: However, there is a very efficient way to find the lowest “one” bit in a word:

x & (~x + 1)

So you may jump directly to the next “one” bit, process it, remove it from the word (just by subtracting), and proceed to the next “one” bit.

Jeff Wu: yeah that makes sense to me

Mikhail Vladimirov: The only problem is that this way you will get the bits themselves (i.e. 1, 2, 4, 8, 16, 32, ...) instead of their indexes (0, 1, 2, 3, 4, 5, ...). But in your case there is no big difference whether you use bit indexes or the bits themselves to identify maturities.

Jeff Wu: right, we just need to change the calculation in date time a little

Mikhail Vladimirov: Only bitnum -> maturity conversion will become more expensive... But you may convert a bit to its index quite efficiently using binary search.

Jeff Wu: maturity => bitnum is just going to be something like 1 << offset

Mikhail Vladimirov: Right, you need to use << and >> instead of + and -, and start with one rather than with zero. Should be about the same cost.

Jeff Wu: right so the other way, we can use something like this:

function getMSB(uint256 x) internal pure returns (uint256 msb) {
    if (x >= 0x100000000000000000000000000000000) {
        x >>= 128;
        msb += 128;
    }
    if (x >= 0x10000000000000000) {
        x >>= 64;
        msb += 64;
    }
    if (x >= 0x100000000) {
        x >>= 32;
        msb += 32;
    }
    if (x >= 0x10000) {
        x >>= 16;
        msb += 16;
    }
    if (x >= 0x100) {
        x >>= 8;
        msb += 8;
    }
    if (x >= 0x10) {
        x >>= 4;
        msb += 4;
    }
    if (x >= 0x4) {
        x >>= 2;
        msb += 2;
    }
    if (x >= 0x2) msb += 1; // No need to shift x anymore
}

Mikhail Vladimirov: This is for backward conversion (bitnum -> maturity)

Jeff Wu: yes

Jeff Wu: so i pass in something like 00010000 and i want to convert that to "5"... or "4" i guess if im going from the big end

setBalanceStorageForSettleCashDebt may set active currency to false if nToken balance is still active

In "setBalanceStorageForSettleCashDebt" function inside "BalanceHandler.sol":

if (cashBalance == 0) {
    accountContext.setActiveCurrency(
        cashGroup.currencyId,
        false,
        Constants.ACTIVE_IN_BALANCES
    );
}

This code may reset the "ACTIVE_IN_BALANCES" flag even if nTokenBalance is not zero. Elsewhere this flag is reset only when both the cash balance and the nToken balance are zero.

Unnecessary invocation of calculateLocalToPurchase

Why invoke calculateLocalToPurchase twice here?

int256 localToPurchase;
(collateralToRaise, localToPurchase) = LiquidationHelpers.calculateLocalToPurchase(
    factors,
    liquidationDiscount,
    collateralToRaise,
    collateralToRaise
);
// Enforce the user specified max liquidation amount
if (maxCollateralLiquidation > 0 && collateralToRaise > maxCollateralLiquidation) {
    collateralToRaise = maxCollateralLiquidation;
    // prettier-ignore
    (
        /* collateralToRaise */,
        localToPurchase
    ) = LiquidationHelpers.calculateLocalToPurchase(
        factors,
        liquidationDiscount,
        collateralToRaise,
        collateralToRaise
    );
}

Instead, you could just do this check:

// Enforce the user specified max liquidation amount
if (maxCollateralLiquidation > 0 && collateralToRaise > maxCollateralLiquidation) {
    collateralToRaise = maxCollateralLiquidation;
}

And then call calculateLocalToPurchase once:

(collateralToRaise, localToPurchase) = LiquidationHelpers.calculateLocalToPurchase(
    factors,
    liquidationDiscount,
    collateralToRaise,
    collateralToRaise
);

I think that would be equivalent and would allow you to drop the first invocation of that function.

Update average incentive calculation to use time weighted integral values

Here is how it works:

  1. You have the current total supply value stored.
  2. You have the integral total supply value stored.
  3. You have the timestamp stored of when the total supply value last changed.

When you are about to change the total supply, you do the following:

integral_total_supply += current_total_supply * (current_time - total_supply_last_changed)
total_supply_last_changed = current_time

So if you know the integral total supply values at the beginning and the end of the interval (S0 and S1) and the interval length in seconds (T), then the average total supply for the interval is (S1 - S0) / T.
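
A minimal sketch of this accumulator (illustrative, assuming bigint timestamps and supply values):

    // Hypothetical sketch of the time-weighted accumulator described above.
    let currentTotalSupply = 0n;
    let integralTotalSupply = 0n;
    let totalSupplyLastChanged = 0n;

    function changeTotalSupply(newTotalSupply: bigint, currentTime: bigint): void {
        integralTotalSupply += currentTotalSupply * (currentTime - totalSupplyLastChanged);
        totalSupplyLastChanged = currentTime;
        currentTotalSupply = newTotalSupply;
    }

    // Average supply over [t0, t1]: (S1 - S0) / (t1 - t0), where S0 and S1 are
    // snapshots of integralTotalSupply taken at t0 and t1.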

Incorrect conversion between exchange rates and implied rates

In market.sol we convert between exchangeRates and impliedRates in two places. Here:

function getImpliedRate(

and here:
function getExchangeRateFromImpliedRate(

The math in these functions is fine, but the way they are invoked is incorrect because timeToMaturity is passed as an absolute number of seconds (i.e. timeToMaturity = maturity - blockTime). Because the rates we're working with are annualized, the timeToMaturity parameter instead needs to be in the form of a "yearCount". Here is how you would transform the timeToMaturity parameter into the proper form for the two functions as written:

yearCount = timeToMaturity / YEAR
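
A minimal sketch of the round trip under this convention, assuming YEAR is the protocol's seconds-per-year constant (a 365-day year here as an assumption):

    // Hypothetical sketch: annualized implied rate <-> exchange rate round trip.
    const YEAR = 31536000; // assumption: 365 days in seconds

    function impliedRateFromExchangeRate(exchangeRate: number, timeToMaturity: number): number {
        const yearCount = timeToMaturity / YEAR;
        return Math.log(exchangeRate) / yearCount;
    }

    function exchangeRateFromImpliedRate(impliedRate: number, timeToMaturity: number): number {
        const yearCount = timeToMaturity / YEAR;
        return Math.exp(impliedRate * yearCount);
    }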

Look at how to simplify decimals usage

Also, I see that you use several different fixed point formats across the code: with 2, 8, and 9 decimals. There are several issues with how these numbers are used:

  1. There are too many formats. It is hard to know which format each value uses, so the code is error-prone: it is easy to use the wrong number of decimals.
  2. Two decimals is too few. You use values with 2 decimals for percentages, so you may represent, say, 2% and 3%, but not 2.5%. This is weird.
  3. You implement calculations with these numbers manually every time, so there are lots of code fragments like x.mul(RATE_PRECISION).div(y) or x.mul(y).div(RATE_PRECISION). Extracting such things into utility functions would make the code simpler, easier to read, less error-prone, and more efficient, as only one function call is needed instead of two. The implementation could also be made more efficient and resistant to phantom overflows.

I would recommend using the same format everywhere, say with 9 or 18 decimals, and implementing arithmetic functions for this format. This applies to numbers stored in memory and on the stack; you may still use a shorter format in storage if needed to save storage space.

Actually, the variety of number formats used is quite large. There are signed and unsigned, integer and fractional, binary and decimal fixed point with 2, 5, 8, 9, 10, and 18 decimals, as well as different bit widths... All these formats are converted to each other and computed manually, i.e. not through functions...

I agree that in storage it does make sense to use the smallest number of bits possible for each value, but this doesn't necessarily mean that all these formats should also be used in memory and on the stack. Probably all the numbers should be unpacked into one of a few formats when read from storage and packed back when written.
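
A minimal sketch of the kind of utility functions suggested above, assuming a single 9-decimal rate format (the helper names are illustrative, not from the codebase):

    // Hypothetical helpers: one fixed-point format, one pair of mul/div functions.
    const RATE_PRECISION = 1_000_000_000n; // 9 decimals

    function rateMul(x: bigint, y: bigint): bigint {
        return (x * y) / RATE_PRECISION; // full-width intermediate avoids phantom overflow
    }

    function rateDiv(x: bigint, y: bigint): bigint {
        return (x * RATE_PRECISION) / y;
    }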

Inconsistency in underlying vs asset cash in liquidateCurrency

In liquidation, the benefit required figure that is calculated at the beginning of each branch is defined in underlying terms but it is not converted into asset terms before math is done with other values that are defined in asset terms. The root of the problem is that exchange rates between currencies are stored in underlying terms, but Notional's balances and the collateral/liquidation calculations are denominated in asset terms. This means that the issue is relatively simple in the local liquidation branches, but is more complicated in the cross-currency liquidation branch.

Liquidate Local Currency:
benefitRequired is defined in underlying terms here:

int256 benefitRequired =
    factors
        .localETHRate
        .convertETHTo(factors.netETHValue.neg())
        .mul(Constants.PERCENTAGE_DECIMALS)
        .div(factors.localETHRate.buffer);

benefitRequired is then used in calculations with values that are defined in asset terms:

(w, benefitRequired) = _withdrawLocalLiquidityTokens(

nTokensToLiquidate = benefitRequired
    .mul(balanceState.storedNTokenBalance)
    .mul(int256(uint8(factors.nTokenParameters[Constants.PV_HAIRCUT_PERCENTAGE])))
    .div(factors.nTokenValue.mul(haircutDiff));

The solution is to convert benefitRequired to assetCash terms before continuing on.

Liquidate Collateral Currency:
All the math that governs the calculation of collateralToRaise and localToPurchase is in underlying terms, but the actual balances are both in asset terms, so we need to convert in two places. First, convert collateralToRaise to asset terms after calculating it here:

collateralToRaise = benefitRequired.mul(Constants.PERCENTAGE_DECIMALS).div(denominator);

The issue is that we still have to calculate localToPurchase, which depends on exchange rates and therefore on calculations in underlying terms. So when we call calculateLocalToPurchase, we either need to transform collateralToRaise back into underlying terms to pass it as an argument, or we need to transform collateralToRaise to underlying terms within the function itself.

Whichever way, we need to convert localToPurchase into asset terms before we conduct this check:

if (localToPurchase > factors.localAvailable.neg()) {

Given that we will call calculateLocalToPurchase again later in this branch with a collateral figure that is in asset terms, I think it might be easiest to convert the collateral figure to underlying terms within the function. But either way, we need to be sure that both the collateral and local figures that calculateLocalToPurchase returns are in asset terms.
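
A minimal sketch of the underlying/asset conversion in question, assuming an assetRate scaled by RATE_PRECISION such that underlying = asset * assetRate / RATE_PRECISION (names are illustrative):

    // Hypothetical sketch: converting between underlying and asset cash terms.
    const RATE_PRECISION = 1_000_000_000n;

    function convertUnderlyingToAsset(underlyingAmount: bigint, assetRate: bigint): bigint {
        return (underlyingAmount * RATE_PRECISION) / assetRate;
    }

    function convertAssetToUnderlying(assetAmount: bigint, assetRate: bigint): bigint {
        return (assetAmount * assetRate) / RATE_PRECISION;
    }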

Add all liquidity curve calculations

  1. Calculate traded exchange rate, calculate implied rate, cash etc
  2. Add convert to underlying for totalCurrentCash
  3. Calculate rateOracle update
  4. Calculate incentives
  5. Update market
  6. Update portfolio

Mistake in benefitRequired calculation in fCashLiquidation

I made a mistake in the spec. In the case where localAvailable < 0, the benefit required is simply -localAvailable; that will bring localAvailable to 0. We don't need to scale it by the buffer as the code currently does, which would bring localAvailable above 0.

// If local available is negative then we can bring it up to zero
c.benefitRequired = c
    .factors
    .localAvailable
    .neg()
    .mul(Constants.PERCENTAGE_DECIMALS)
    .div(c.factors.localETHRate.buffer);
