Precision Loss Errors

Solidity computations can lead to some devious loss of precision errors

Numerical operations in Solidity can result in precision loss, where the calculated, stored, or returned amount is incorrect and typically lower than it should be. These bugs can disadvantage users of decentralized finance platforms, and they can sometimes be exploited by attackers to drain funds from such platforms.

Division Before Multiplication

In Solidity, division can result in rounding-down errors, so to minimize precision loss we always want to perform multiplication before division. Consider this simplified code from Numoen's Code4rena contest:

import "openzeppelin-contracts/utils/math/Math.sol";

// source: https://code4rena.com/reports/2023-01-numoen#h-01-precision-loss-in-the-invariant-function-can-lead-to-loss-of-funds
error InvariantError();
function errorInvariant(uint amount0,
                        uint amount1,
                        uint liquidity,
                        uint token0Scale,
                        uint token1Scale) public view returns (uint256) {
    if (liquidity == 0) {
        require (amount0 == 0 && amount1 == 0);
        return 0;
    }

    // @audit: division can cause rounding, so we always want to do it last. Doing
    // multiplication after division, as occurs here, can cause precision loss
    uint256 scale0 = Math.mulDiv(amount0, 1e18, liquidity) * token0Scale;
    uint256 scale1 = Math.mulDiv(amount1, 1e18, liquidity) * token1Scale;

    console.log("Loss of precision: multiplication after division");
    console.log("scale0 : ", scale0);
    console.log("scale1 : ", scale1);

    uint upperBound = 5 * 1e18;
    if (scale1 > 2 * upperBound) revert InvariantError();

    uint256 a = scale0 * 1e18;
    uint256 b = scale1 * upperBound;
    uint256 c = (scale1 * scale1) / 4;
    uint256 d = upperBound * upperBound;

    console.log("a      : ", a);
    console.log("b      : ", b);
    console.log("c      : ", c);
    console.log("d      : ", d);

    return a + b - c - d;
}

Here scale0 & scale1 can be subject to significant loss of precision due to multiplication occurring after division which can go on to affect the subsequent calculations. To prevent this, always perform multiplication before division:

function correctInvariant(uint amount0,
                          uint amount1,
                          uint liquidity,
                          uint token0Scale,
                          uint token1Scale) public view returns (uint256) {
    if (liquidity == 0) {
        require (amount0 == 0 && amount1 == 0);
        return 0;
    }

    // @audit: changed to perform division after multiplication
    uint256 scale0 = Math.mulDiv(amount0 * token0Scale, 1e18, liquidity);
    uint256 scale1 = Math.mulDiv(amount1 * token1Scale, 1e18, liquidity);

    console.log("Prevent precision loss: multiplication before division");
    console.log("scale0 : ", scale0);
    console.log("scale1 : ", scale1);

    uint upperBound = 5 * 1e18;
    if (scale1 > 2 * upperBound) revert InvariantError();

    uint256 a = scale0 * 1e18;
    uint256 b = scale1 * upperBound;
    uint256 c = (scale1 * scale1) / 4;
    uint256 d = upperBound * upperBound;

    console.log("a      : ", a);
    console.log("b      : ", b);
    console.log("c      : ", c);
    console.log("d      : ", d);

    return a + b - c - d;
}

This code can be wrapped in a simple test harness to show exactly how the precision loss is occurring:

function testInvariantDivBeforeMult() public {
    uint token0Amount    = 1.5*10**6;
    uint token1Amount    = 2 * (5 * 10**24 - 10**21);
    uint liquidity       = 10**24;
    uint token0Precision = 10**12;
    uint token1Precision = 1;

    uint result = vulnContract.errorInvariant(
        token0Amount, token1Amount, liquidity, token0Precision, token1Precision);
    assertEq(0, result);

    result = vulnContract.correctInvariant(
        token0Amount, token1Amount, liquidity, token0Precision, token1Precision);
    assertEq(500000000000000000000000000000, result);
}

The test output shows a significant loss of precision in "scale0" which is carried into "a":

  Loss of precision: multiplication after division
  scale0 :  1000000000000
  scale1 :  9998000000000000000
  a      :  1000000000000000000000000000000
  b      :  49990000000000000000000000000000000000
  c      :  24990001000000000000000000000000000000
  d      :  25000000000000000000000000000000000000

  Prevent precision loss: multiplication before division
  scale0 :  1500000000000
  scale1 :  9998000000000000000
  a      :  1500000000000000000000000000000
  b      :  49990000000000000000000000000000000000
  c      :  24990001000000000000000000000000000000
  d      :  25000000000000000000000000000000000000

In the case of Numoen's audit, this precision loss in the invariant() function could be used to incorrectly satisfy the invariant check, allowing an attacker to drain funds from the contract.

Sometimes Division Before Multiplication errors are hidden from auditors by function calls inside the equation. One technique auditors can employ is to manually expand the function calls to reveal any hidden Division Before Multiplication. Consider this simplified finding from Yield VR's audit:

iRate = baseVbr + utilRate.wmul(slope1).wdiv(optimalUsageRate)
// expand out wmul & wdiv to see what is actually going on
// iRate = baseVbr + ((utilRate * slope1 / 1e18) * 1e18) / optimalUsageRate
//
// now the Division Before Multiplication is visible:
// (utilRate * slope1 / 1e18) truncates before being multiplied by 1e18,
// leading to precision loss.
//
// To fix, always perform Multiplication Before Division:
// iRate = baseVbr + utilRate * slope1 / optimalUsageRate;

Here wmul() and wdiv() were hiding the Division Before Multiplication, but once the function calls are expanded it becomes visible. Always remember: Multiplication Before Division!
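As a minimal sketch of that fix (the helper below is hypothetical, not from the Yield codebase), the entire calculation can be expressed with a single full-precision mulDiv so that no intermediate division truncates the result:

import "openzeppelin-contracts/utils/math/Math.sol";

// hypothetical helper: utilRate.wmul(slope1).wdiv(optimalUsageRate) collapses to
// utilRate * slope1 / optimalUsageRate since the intermediate 1e18 factors cancel;
// Math.mulDiv keeps the full 512-bit product before performing the single division
function rateMultBeforeDiv(uint baseVbr,
                           uint utilRate,
                           uint slope1,
                           uint optimalUsageRate) internal pure returns (uint iRate) {
    iRate = baseVbr + Math.mulDiv(utilRate, slope1, optimalUsageRate);
}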

More examples: [1, 2, 3, 4, 5, 6, 7, 8, 9]

Rounding Down To Zero

As we've previously seen, division in Solidity can result in rounding down, and to minimize precision loss we always need to perform multiplication before division. Yet even when we do this, some precision loss can occur, especially when dealing with small numbers, and rounding down to zero can be a source of major errors if not handled correctly. Consider this simplified loan repayment code from Cooler's Sherlock contest:

// source: https://github.com/sherlock-audit/2023-01-cooler-judging/issues/263
function errorRepay(uint repaid) external {
    console.log("PrecisionLoss.errorRepay()");
    // @audit if repaid small enough, decollateralized will round down to 0,
    // allowing loan to be repaid without changing collateral
    uint decollateralized = loanCollateral * repaid / loanAmount;

    loanAmount     -= repaid;
    loanCollateral -= decollateralized;

    console.log("decollateralized : ", decollateralized);
    console.log("loanAmount       : ", loanAmount);
    console.log("loanCollateral   : ", loanCollateral);
}

Here we have a loan that was taken out against some collateral, and a function to repay part or all of the loan. When a repayment occurs, both the loan amount and the loan collateral must be proportionally reduced. However, when repaying in small increments, "decollateralized" rounds down to zero, so the loan amount is reduced without reducing the collateral. To fix this, the function should revert when this would occur:

function correctRepay(uint repaid) external {
    uint decollateralized = loanCollateral * repaid / loanAmount;

    // @audit don't allow loan repayment without deducting from
    // collateral in the case of rounding to zero from small repayment
    if( decollateralized == 0 ) { revert("Round down to zero"); }

    loanAmount     -= repaid;
    loanCollateral -= decollateralized;
}

Verify it using a simple test harness:

function testDivRoundToZero() public {
    // borrow 10 USDC using 1 USDC as collateral
    uint loanAmount     = 10 * USDC_PRECISION;
    uint loanCollateral = 1  * USDC_PRECISION;

    vulnContract.setLoanAmount(loanAmount);
    vulnContract.setLoanCollateral(loanCollateral);

    // repay very small amount
    uint repayAmount    = 0.000009 * 10**6;

    vulnContract.errorRepay(repayAmount);

    // loan amount reduced but collateral stayed the same
    assertEq(loanAmount-repayAmount, vulnContract.loanAmount());
    assertEq(loanCollateral, vulnContract.loanCollateral());

    vm.expectRevert("Round down to zero");
    vulnContract.correctRepay(repayAmount);
}

The output from the test run shows the rounding to zero that occurs:

  PrecisionLoss.errorRepay()
  decollateralized :  0
  loanAmount       :  9999991
  loanCollateral   :  1000000

To prevent this error, always consider whether your computation may round down to zero, especially when using small numbers, and if so whether your code should revert. More examples: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

No Precision Scaling

Consider a trading pool that trades a primary token against a secondary token; these tokens could each have different precision. For example, DAI has 18 decimal places while USDC only has 6 decimal places; DAI/USDC or USDC/DAI are common stablecoin pools that allow market participants to trade between these two popular stablecoins.

Subtle loss of precision errors can occur if computation is done by combining the amounts of these two tokens with different precision, without first scaling the precision of the secondary token to the primary token's precision.
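To illustrate with hypothetical numbers (not taken from any audit), adding a raw DAI amount to a raw USDC amount makes the USDC side effectively vanish unless it is scaled up first:

uint daiAmount  = 100 * 1e18; // 100 DAI (18 decimals)
uint usdcAmount = 100 * 1e6;  // 100 USDC (6 decimals)

// wrong: 100e18 + 100e6 = 100.0000000001e18, the USDC is effectively lost
uint wrongTotal   = daiAmount + usdcAmount;

// correct: scale USDC up to the primary token's 18 decimals first, then add
uint scaledUsdc   = usdcAmount * 1e18 / 1e6; // 100e18
uint correctTotal = daiAmount + scaledUsdc;  // 200e18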

Examine this code from Notional's Sherlock audit, which I have simplified to run by itself in a simple test harness with logging to show internally where the loss of precision occurs. This code attempts to calculate the value, denominated in the trading pool's primary token, of LP (Liquidity Provider) tokens for a pool composed of two tokens:

// source: https://github.com/sherlock-audit/2023-02-notional/blob/main/leveraged-vaults/contracts/vaults/common/internal/pool/TwoTokenPoolUtils.sol#L67
// given a trading pool of two tokens & an amount of LP pool tokens,
// return the total value of the given LP tokens in the pool's primary token
function errorGetWeightedBalance(uint token1Amount,
                                 uint token1Precision,
                                 uint token2Amount,
                                 uint ,//token2Precision,
                                 uint poolTotalSupply,
                                 uint lpPoolTokens,
                                 uint lpPoolTokensPrecision,
                                 uint oraclePrice) external view returns (uint256 primaryAmount) {
    console.log("PrecisionLoss.errorGetWeightedBalance()");
    // Get shares of primary and secondary balances with the provided poolClaim
    uint256 primaryBalance   = token1Amount * lpPoolTokens / poolTotalSupply;
    uint256 secondaryBalance = token2Amount * lpPoolTokens / poolTotalSupply;

    console.log("primaryBalance           : ", primaryBalance);
    console.log("secondaryBalance         : ", secondaryBalance);

    // Value the secondary balance in terms of the primary token using the oraclePrice
    uint256 secondaryAmountInPrimary = secondaryBalance * lpPoolTokensPrecision / oraclePrice;
    console.log("secondaryAmountInPrimary : ", secondaryAmountInPrimary);

    // Make sure primaryAmount is reported in token1Precision
    // @audit (primaryBalance + secondaryAmountInPrimary)
    // primaryBalance & secondaryAmountInPrimary may not be denominated in
    // the same precision => they can't safely be added together without
    // first scaling the secondary token to match the primary token's precision
    primaryAmount = (primaryBalance + secondaryAmountInPrimary) * token1Precision / lpPoolTokensPrecision;
    console.log("primaryAmount            : ", primaryAmount);
}

(primaryBalance + secondaryAmountInPrimary) adds together the balances of two tokens which may not have the same precision, resulting in a loss of precision. To prevent this defect, the secondary amount must first be scaled into the primary token's precision before further computation:

// @audit correct version scales the secondary token to match the primary token's precision
// before performing further computation
function correctGetWeightedBalance(uint token1Amount,
                                   uint token1Precision,
                                   uint token2Amount,
                                   uint token2Precision,
                                   uint poolTotalSupply,
                                   uint lpPoolTokens,
                                   uint lpPoolTokensPrecision,
                                   uint oraclePrice) external view returns (uint256 primaryAmount) {  
    console.log("PrecisionLoss.correctGetWeightedBalance()");
    // Get shares of primary and secondary balances with the provided poolClaim
    uint256 primaryBalance   = token1Amount * lpPoolTokens / poolTotalSupply;
    uint256 secondaryBalance = token2Amount * lpPoolTokens / poolTotalSupply;

    // @audit scale secondary token amount to first token's precision prior to any computation
    secondaryBalance = secondaryBalance * token1Precision / token2Precision;   

    console.log("primaryBalance           : ", primaryBalance);
    console.log("secondaryBalance         : ", secondaryBalance);

    // Value the secondary balance in terms of the primary token using the oraclePrice
    uint256 secondaryAmountInPrimary = secondaryBalance * lpPoolTokensPrecision / oraclePrice;
    console.log("secondaryAmountInPrimary : ", secondaryAmountInPrimary);

    // Make sure primaryAmount is reported in token1Precision
    primaryAmount = primaryBalance + secondaryAmountInPrimary;
    console.log("primaryAmount            : ", primaryAmount);
}

Here is a test harness that runs both the erroneous version and the correct version for DAI/USDC & USDC/DAI pools:

// given a pool of two tokens with different precision, calculate
// value of given amount of LP tokens in the pool's primary token
function testWeightedPoolBalanceDiffPrecision() public {
    uint DAI_PRECISION         = 10**18;
    uint USDC_PRECISION        = 10**6;

    uint token1Precision       = DAI_PRECISION;
    uint token2Precision       = USDC_PRECISION;
    uint token1Amount          = 100 * token1Precision;
    uint token2Amount          = 100 * token2Precision;
    uint poolTotalSupply       = 100;
    uint lpPoolTokens          = 50;
    uint lpPoolTokensPrecision = DAI_PRECISION;
    uint oraclePrice           = 1 * DAI_PRECISION;

    // first test: DAI/USDC pool
    uint result = vulnContract.errorGetWeightedBalance(
        token1Amount, token1Precision, token2Amount, token2Precision, 
        poolTotalSupply, lpPoolTokens, lpPoolTokensPrecision, oraclePrice );
    assertEq(50000000000050000000, result);    

    result = vulnContract.correctGetWeightedBalance(
        token1Amount, token1Precision, token2Amount, token2Precision, 
        poolTotalSupply, lpPoolTokens, lpPoolTokensPrecision, oraclePrice );
    assertEq(100000000000000000000, result);

    // second test: USDC/DAI pool
    result = vulnContract.errorGetWeightedBalance(
        token2Amount, token2Precision, token1Amount, token1Precision, 
        poolTotalSupply, lpPoolTokens, lpPoolTokensPrecision, oraclePrice );
    assertEq(50000000, result);

    result = vulnContract.correctGetWeightedBalance(
        token2Amount, token2Precision, token1Amount, token1Precision, 
        poolTotalSupply, lpPoolTokens, lpPoolTokensPrecision, oraclePrice );
    assertEq(100000000, result);
}

Examining the output shows the erroneous version dramatically undervalues the user's LP tokens, by approximately 50%:

PrecisionLoss.errorGetWeightedBalance() (DAI/USDC)
primaryBalance           :  50000000000000000000
secondaryBalance         :  50000000
secondaryAmountInPrimary :  50000000
primaryAmount            :  50000000000050000000
PrecisionLoss.correctGetWeightedBalance() (DAI/USDC)
primaryBalance           :  50000000000000000000
secondaryBalance         :  50000000000000000000
secondaryAmountInPrimary :  50000000000000000000
primaryAmount            :  100000000000000000000

PrecisionLoss.errorGetWeightedBalance() (USDC/DAI)
primaryBalance           :  50000000
secondaryBalance         :  50000000000000000000
secondaryAmountInPrimary :  50000000000000000000
primaryAmount            :  50000000
PrecisionLoss.correctGetWeightedBalance() (USDC/DAI)
primaryBalance           :  50000000
secondaryBalance         :  50000000
secondaryAmountInPrimary :  50000000
primaryAmount            :  100000000

When combining amounts of multiple tokens that may have different precision, we must always take care to convert all of the amounts into the primary token's precision before any computation. More examples: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Excessive Precision Scaling

In the previous section we learned that token amounts can have different precisions, so it is very important to scale them to the same precision before performing calculations. In attempting to do this, smart contracts can excessively scale already-scaled amounts, causing token amounts to become hugely inflated. Examine this finding from Notional's Sherlock audit.
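The original finding isn't reproduced here, so the code below is a hypothetical sketch of the pattern: one component correctly scales a 6-decimal USDC amount up to 18 decimals, then a second component, unaware the amount is already scaled, scales it again:

uint constant USDC_PRECISION = 1e6;
uint constant DAI_PRECISION  = 1e18;

function excessiveScaling(uint usdcAmount) internal pure returns (uint) {
    // first component: correctly scales 6-decimal USDC up to 18 decimals
    uint scaledOnce = usdcAmount * DAI_PRECISION / USDC_PRECISION;

    // second component: re-scales the already-scaled amount,
    // inflating it by a factor of 1e12
    uint scaledTwice = scaledOnce * DAI_PRECISION / USDC_PRECISION;

    // e.g. 100 USDC (100e6) ends up as 100e30 instead of 100e18
    return scaledTwice;
}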

Auditors can find this type of vulnerability by tracing through code paths that handle token amounts and checking whether amounts are scaled repeatedly when no further scaling is needed, paying special attention to larger modular codebases where different sub-components may be called to perform a function and may incorrectly re-scale an already-scaled amount.

Mismatched Precision Scaling

Larger codebases are often created by multiple developers, and different modules within the same codebase may have slight quirks when scaling precision. One module may scale precision by a token's decimals, while another module may hard-code a common value such as 1e18; this mismatched precision scaling can create subtle precision loss errors for tokens whose precision differs from the hard-coded value. Consider this code [1, 2] from Yearn's Code4rena audit:

// @audit Vault.vy; vault precision using token's decimals
decimals: uint256 = DetailedERC20(token).decimals()
self.decimals     = decimals
/// ...
def pricePerShare() -> uint256:
    return self._shareValue(10 ** self.decimals)

// @audit YearnYield; yield precision using hard-coded 1e18
function getTokensForShares(uint256 shares, address asset) public view override returns (uint256 amount) {
    if (shares == 0) return 0;
    // @audit should divide by the vault's decimals rather than a hard-coded 1e18
    amount = IyVault(liquidityToken[asset]).getPricePerFullShare().mul(shares).div(1e18);
}
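A possible fix, sketched here assuming the vault token exposes its decimals via a standard ERC20 decimals() getter (not shown in the original snippet), is to derive the divisor from the vault's actual precision rather than hard-coding 1e18:

// @audit fixed: divide by the vault's own share precision so vaults whose
// token does not have 18 decimals are still valued correctly
function getTokensForShares(uint256 shares, address asset) public view override returns (uint256 amount) {
    if (shares == 0) return 0;
    // decimals() is assumed to be available on the vault token here
    uint256 vaultPrecision = 10 ** IyVault(liquidityToken[asset]).decimals();
    amount = IyVault(liquidityToken[asset]).getPricePerFullShare().mul(shares).div(vaultPrecision);
}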

Auditors should check that all modules of larger codebases are using the same precision scaling and be alert that hard-coded precision values in one module may conflict with dynamic precision values in another module, for tokens whose precision does not match the hard-coded value. More examples: [1, 2, 3, 4, 5, 6, 7, 8, 9]

Downcast Overflow

When downcasting from one type to another, Solidity will not revert but silently overflow (truncate), resulting in unexpected behavior and exploitable bugs. Auditors should look out for code patterns where require() checks occur before a downcast; these checks may pass before the downcast but no longer hold afterward due to downcast overflow. Consider this simplified code from @gpersoon's Balancer bug bounty:

function errorDowncast(uint sTimeU256, uint eTimeU256)  
  external view returns (uint sFromU32, uint eFromU32) {
    // checks before downcast conversions may not be true
    // after the downcast if overflow occurs
    require(eTimeU256 > sTimeU256, "End Time must > startTime ");

    uint32 sTimeU32 = uint32(sTimeU256);
    uint32 eTimeU32 = uint32(eTimeU256); // overflow for >= 2 ** 32

    console.log("sTimeU256 : ", sTimeU256);
    console.log("eTimeU256 : ", eTimeU256);
    console.log("sTimeU32  : ", sTimeU32);
    console.log("eTimeU32  : ", eTimeU32); // 0 for 2 ** 32

    return (uint(sTimeU32), uint(eTimeU32));
}

We have a function that is given a startTime & endTime as unsigned 256-bit integers. First it verifies the invariant (endTime > startTime), then it downcasts both to unsigned 32 bits. In smart contracts this often occurs due to storage packing, where downcasts are used to pack multiple variables into one storage slot.
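For example, a packed struct like the hypothetical one below stores both timestamps (plus an amount) in a single 32-byte storage slot, which is why code like this downcasts to uint32 in the first place:

// hypothetical packed struct: 32 + 32 + 192 = 256 bits = one storage slot
struct Schedule {
    uint32  startTime;
    uint32  endTime;
    uint192 amount;
}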

The endTime will overflow for values >= 2 ** 32, so the invariant check becomes invalid after the downcast. In Balancer and other contracts this corrupted state would be written to storage, then read and exploited later on. When downcasting, developers should consider using OpenZeppelin's SafeCast library, which reverts if the downcast would overflow:

// uses OpenZeppelin SafeCast: import "openzeppelin-contracts/utils/math/SafeCast.sol";
function correctDowncast(uint sTimeU256, uint eTimeU256)
  external view returns (uint sFromU32, uint eFromU32) {
    require(eTimeU256 > sTimeU256, "End Time must > startTime ");

    uint32 sTimeU32 = SafeCast.toUint32(sTimeU256);
    uint32 eTimeU32 = SafeCast.toUint32(eTimeU256);

    console.log("sTimeU256 : ", sTimeU256);
    console.log("eTimeU256 : ", eTimeU256);
    console.log("sTimeU32  : ", sTimeU32);
    console.log("eTimeU32  : ", eTimeU32);

    return (uint(sTimeU32), uint(eTimeU32));
}

Wrapping this in a simple test harness we can verify both the erroneous & correct versions behave as expected:

function testDowncast() public {
    uint sTimeU256 = 2 ** 31;
    uint eTimeU256 = 2 ** 32;

    assert(sTimeU256 < eTimeU256);

    (uint sFromU32, uint eFromU32) = vulnContract.errorDowncast(sTimeU256, eTimeU256);
    assert(sFromU32 > eFromU32);

    vm.expectRevert("SafeCast: value doesn't fit in 32 bits");
    vulnContract.correctDowncast(sTimeU256, eTimeU256);
}

And examine the output to see exactly how the downcast overflows:

sTimeU256 :  2147483648
eTimeU256 :  4294967296
sTimeU32  :  2147483648
eTimeU32  :  0

Auditors should look out for code that uses unsafe downcasting, especially where require() checks and other invariants are verified before the downcast occurs, as those invariants may no longer hold true afterward due to downcast overflow. More examples: [1, 2, 3, 4]

Rounding Leaks Value From Protocol

In Automated Market Making (and similar) protocols, rounding in buy, sell & protocol fee calculations should always favor the protocol, to prevent leaking value from the system to traders. Consider this medium-severity precision loss finding from Cyfrin's SudoSwap audit:

// @audit rounding down favors traders, can leak value from protocol
protocolFee = outputValue.mulWadDown(protocolFeeMultiplier);
// fixed: round up to favor protocol and prevent value leak to traders
protocolFee = outputValue.mulWadUp(protocolFeeMultiplier);

// @audit rounding down favors traders, can leak value from protocol
tradeFee = outputValue.mulWadDown(feeMultiplier);
// fixed: round up to favor protocol and prevent value leak to traders
tradeFee = outputValue.mulWadUp(feeMultiplier);

protocolFee & tradeFee were originally rounding down which would result in the system leaking value over time to traders who would be paying lower fees than they should. This was fixed after the audit by rounding up protocolFee & tradeFee which benefits the protocol, preventing value from leaking from the system to the traders.