A Liquidation System Redesign: A Pre-MIP Discussion


Introduction

This forum post aims to serve as a “pre-MIP” discussion, where the Maker Community and the Smart Contracts Domain Team can discuss and provide input on the proposed solution for a redesign of the liquidation system of the Maker Protocol.

The main goal of this “pre-MIP” forum post is to introduce a proposed solution for a liquidation system redesign and gather feedback in preparation for the creation of a formal Maker Improvement Proposal (MIP), which will contain more implementation details. The MIP itself will be proposed as a MIP2 proposal, which means that the Feedback Period and Frozen Period defined in MIP0 are ignored for both MIPs and Subproposals and are instead determined by the MIP Author(s). In general, in order for a proposal to qualify as a MIP2 proposal, it must address one of the issues listed in MIP1.

Initial Research and Findings

The process behind the proposed solution design initially began with looking at many different options to determine which may be optimal for the Maker Protocol. More specifically, we considered the following classical auction solutions:

  • English Auctions
  • Reverse English Auctions
  • Sealed/Batched Bid Auctions
  • Multiphase Auctions
  • Dutch Auctions
  • Reverse Dutch Auctions

After research and design consideration, the Smart Contracts Domain Team believes the Dutch auction system, based on its merits (further described below), is likely the most suitable solution for a redesign of the liquidations system. The Dutch auction system has been validated in the market and has proven to work for massive liquidation events, such as Set Protocol’s use of Dutch auctions for rebalancing. Set Protocol’s implementation can be found here and is also described on page 24 of their whitepaper.

In terms of this proposed solution design, the sections below address the motivation behind the system redesign, the design considerations, and the details regarding the requirements of the solution:

Motivation

Our ongoing analysis, further focused by the events of March 12th, 2020 (“Black Thursday”), identified several opportunities to improve the Maker Protocol’s collateral auction mechanism. Specifically, the following list covers the areas of improvement that can be considered when designing a new liquidation system for the Maker Protocol:

  • Reducing the reliance on DAI liquidity
    • Today, Keepers require sufficient DAI before making a bid, and that DAI is locked until that Keeper is outbid; if the Keeper’s bid wins, it must then find a way to recycle that purchased collateral back into DAI in an efficient manner.
    • Since multiple Keepers are necessary to ensure competitive (and thus, efficient) auctions, there is a multiplier effect on the DAI required. For every 1 DAI of debt sought by auction, there needs to be at least 2 DAI of liquidity.
    • The need to lock DAI for multiple blocks after a bid is submitted means that flash loans cannot be used to ease liquidity requirements.
  • Reducing the likelihood of auctions settling far from the market price
    • Currently, an arbitrarily low bid can be made immediately after an auction starts. If other participants are not able to outbid the initial bid, due to liquidity constraints (above) or network congestion, the auction is won at the initial low bid price.
  • Reducing the barriers to entry
    • Participating in auctions: (1) is capital intensive; (2) involves significant time exposure to collateral and DAI price; and (3) requires substantial technical sophistication.
    • As MCD scales with more collateral types, the infrastructure cost and liquidity requirements of running Keepers for all collateral types increase. Therefore, the Maker Community should seek to hold constant or reduce both liquidity requirements and infrastructure costs as MCD scales.

Design Considerations for a New Liquidation System

With the above in mind, the Smart Contracts Domain Team put together a list of desirable features for the new liquidation system of the Maker Protocol.

  • Single Block Composability
    • Decentralized Exchange (DEX) Compatibility - This would allow Keepers to use their liquidity to purchase collateral and instantly cycle it back into DAI through a DEX or DEX aggregator. This method would lower the friction of recycling capital for Keepers and also mitigate the risk inherent with Keepers committing to a bid for some period of time.
    • Automatic Auction Bidding with DEX Aggregators - It should be possible for open auctions to show up in DEX aggregators, allowing regular users to buy directly from active auctions (if they have the best price available on the market). This function expands the pool of auction participants beyond just Keepers, likely increasing liquidity and resilience for the system.
    • Flash Loan Compatibility - Flash loans can be used to augment liquidity by performing atomic arbitrage. Ideally, this means that, for some set of bidding strategies, Keepers would need only provide the gas costs to execute a flash loan. However, this should not exclude Keepers from providing liquidity if they choose to do so. This function could provide a significant advantage as it may alleviate the capital requirements for auction participation and permit Keepers to employ such strategies to mitigate the operational security risk of holding funds in hot wallets (a rough sketch of one such strategy follows this list).
  • Allowing Partial Bids - A Keeper could buy a portion of the collateral in an ongoing auction, even if they lack the capital to purchase the full amount. This functionality would open the door to smaller players to participate in auctions more readily. It also likely simplifies the liquidator contract (Cat), which would no longer need to perform partial liquidations.
  • Protection From Low Bids - The new liquidation system should contain logic that prevents Keepers from making bids that are unreasonably far from prevailing market prices.
  • Relying Only On Prices From the Oracle Security Module (OSM) - The new liquidation system, like the existing one, should be oracle risk-minimized.
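
To make the flash loan idea above concrete, here is a rough, purely hypothetical outline of an atomic bidding strategy in Python. The lender, auction, and dex objects and all of their methods are stand-ins invented for this sketch; they are not an existing Maker or lender API, and the eventual contract interface may look nothing like this.

```python
# Hypothetical sketch of an atomic flash-loan bidding strategy. The `lender`,
# `auction`, and `dex` objects and their methods are invented for illustration.

def flash_loan_bid(auction_id, lot, lender, auction, dex):
    """Buy `lot` units of collateral from an active auction with borrowed DAI
    and repay the loan out of DEX proceeds, all within one transaction."""
    price = auction.current_price(auction_id)      # DAI per unit of collateral
    dai_needed = lot * price

    with lender.flash_loan(dai_needed) as dai:     # 1. borrow DAI for this tx only
        auction.take(auction_id, lot, price, dai)  # 2. buy collateral at the current auction price
        proceeds = dex.sell(lot)                   # 3. sell the collateral for DAI on a DEX
        if proceeds < dai_needed:                  # 4. unprofitable: abort, reverting the whole tx
            raise RuntimeError("unprofitable, reverting")
        # 5. the loan (plus any fee) is repaid as the context exits; the Keeper
        #    keeps proceeds - dai_needed and has risked only the gas cost.
```

Because the whole sequence is one transaction, a failed profitability check simply reverts and the Keeper loses nothing but gas, which is exactly the property the flash loan bullet above is after.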

Proposed Solution Design

The Smart Contracts Domain Team believes, given the design considerations above, that Dutch auctions are likely the optimal design solution. Dutch auctions, in which a high starting price is set and then decreases deterministically over time, can address most of the design considerations discussed above and likely make the Maker Protocol more robust (a rough sketch of such a price curve follows the list below).

  • Single Block Composability
    • Decentralized Exchange (DEX) Compatibility - Since Dutch auctions settle instantly, Keepers could use their liquidity to purchase collateral and instantly cycle it back into DAI through a DEX or DEX aggregator.
    • Automatic Auction Bidding with DEX Aggregators - Again, since Dutch auctions settle instantly, open auctions can show up in DEX aggregators allowing regular users to buy directly from active auctions.
    • Flash Loan Compatibility - The instant settlement of Dutch auctions also enables the use of flash loans.
  • Allowing Partial Bids - A Dutch auction would permit Keepers to buy a portion of the collateral in an ongoing auction, even if they lack the capital to purchase the full amount.
  • Protection From Low Bids - Since Dutch auction prices start high and decrease deterministically, extremely low bids are not generally possible unless sufficient time passes (and auctions could be configured to reset before reaching arbitrarily low prices).
  • Relying Only On Prices From the Oracle Security Module (OSM) - A Dutch auction would rely on the OSM to liquidate undercollateralized Vaults and establish an initial price of the auctions.
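
To make the deterministic price decrease and the reset behaviour concrete, below is a minimal Python sketch of one possible price curve. All names and parameter values (buf, cut, step, tail, cusp) are assumptions made for this example rather than finalized parameters of the proposal: the starting price is the OSM price at kick time multiplied by a premium, the price then decays by a fixed factor at each step, and an auction is flagged for a reset once it has run too long or fallen too far.

```python
# Illustrative sketch of a Dutch auction price curve; the parameter names and
# values are assumptions for this example, not the actual proposed design.

def auction_price(kick_osm_price, buf, elapsed, step, cut):
    """Price of one auction `elapsed` seconds after it was kicked.

    kick_osm_price -- OSM price of the collateral at kick time (DAI per unit)
    buf            -- multiplicative premium on the starting price, e.g. 1.15
    elapsed        -- seconds since the auction started
    step           -- seconds between price drops
    cut            -- multiplicative decay applied at each step, e.g. 0.99
    """
    start_price = kick_osm_price * buf
    return start_price * cut ** (elapsed // step)

def needs_reset(kick_osm_price, buf, elapsed, step, cut, tail, cusp):
    """Flag an auction for re-initialization (a fresh kick at the current OSM
    price) once it has run longer than `tail` seconds or its price has fallen
    below the fraction `cusp` of its starting price."""
    price = auction_price(kick_osm_price, buf, elapsed, step, cut)
    return elapsed > tail or price < cusp * (kick_osm_price * buf)

# Example: start at 1.15x the OSM price and drop 1% every 90 seconds.
print(auction_price(kick_osm_price=200.0, buf=1.15, elapsed=900, step=90, cut=0.99))
```

The exponential decay here is only one choice; a linear decrease, or any other deterministic function of elapsed time, would fit the same interface.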

31 Likes

So a non-technical guy like me could just tap into anything using the 0x API, 1inch, etc. to participate?

3 Likes

Yeah, the Set Protocol is a great example of Dutch Auctions working well.

Great writeup on the liquidation system redesign.

I guess the only thing I want to add is that it’s important that the auction doesn’t reduce the price too aggressively. It can be important to give market participants time to short DAI (these might be different from auction participants).

Good that you’re considering protection from low bids and a failsafe from arbitrarily low prices.

It might be good to investigate possible variable auction parameters depending on liquidation size. Basically, lengthening the auction process for big liquidations [less aggressive price decrease per block]. This throttle effect is important for black swan events.

5 Likes

Exactly. It would be easy for aggregators like 1inch, dex_ag and others to directly tap into this system.

1 Like

1000 times yes please!

3 Likes

This all sounds great, but I’m curious as to why the current system was first chosen and what the tradeoffs would be between the two. I’m guessing it’s not all positive changes by switching.

1 Like

Very timely comment. I’ll make sure this idea is considered when foundation technical teams discuss this tomorrow. Also up for discussion tomorrow is how we can easily present all auctions to integrators on-chain. One possible option was an enumerable set, but that has some limitations when we have too many liquidations. One solution to that was to make sure auctions were always ordered and let the integrator iterate on their own. Extending the auction time for large liquidations would change the ordering, but we may yet find a better solution to the former problem.

3 Likes

@cmooney Maybe too complicated, kicking multiple vaults together as a single auction?

Also regarding the variable parameters, it would probably be better to have variable parameters only for big liquidations. It should be easier to track if most auctions have the same parameters, with prices decreasing at the same rate.

This is a good question, but requires a very detailed response. We’ve been making many changes (low-hanging fruit) to the liquidation system over the past few months: tuning some of the governance parameters and working up fixes for the double liquidity requirement and blocked flop auctions, and we have also put a bunch of work into the auction-keeper reference implementation and the dockerized-auction-keeper. We’ve exhausted most of the low-hanging fruit, and when we start to consider medium- to long-term fixes to liquidations, we realize we’ve entered a realm of diminishing returns.

What’s more, when we consider all of the design objectives we would have today, the strengths of single-block composability in DeFi, and partial liquidations, mixed with the shortcomings in the current design, it becomes clear that the Ethereum ecosystem has changed since many of the original design decisions for the existing system were made.

So far, and I can say this honestly, I can name no existing feature of the current design that the new design does not also include. That is, up to this point, it would appear as though a Dutch auction system is strictly better in all regards. That is not to say we won’t find some trade-off in the coming weeks or months of design, but that is also why we are engaging the community now: we want to hear new ideas and discover pitfalls before we’re too far along.

EDIT: I forgot about front-running attacks. In a Dutch auction design where a keeper brings no liquidity to the table (flash loan), any bid can be front-run by an attacker with more gas. There are some ways to make this more difficult, but we have no solution to absolutely stop this yet.

3 Likes

Aggregating liquidations is under consideration, but it does indeed add some additional complexity. As would having different parameters for large liquidations vs. smaller ones. We have to consider these options and weigh them against other solutions and how they hold up when considering all of the various risks and complexities.

1 Like

I’d add that the ability to do partial bids that comes with Dutch auctions makes it so that there’s no real difference between one big auction and a bunch of smaller auctions kicked off around the same time. So IMO having different parameters for large auctions doesn’t really make sense.

@Kurt_Barry True. The point on throttling still stands. Could have a global throttle that can slow down ALL auctions or all new auctions after a trigger.

1 Like

If we had one big auction with a sell price at a single value, I could see a situation where the price of ETH is falling faster than the auction sell price is lowering, causing minimal bids to come in.

By having multiple auctions (especially with variable price lowering rates) it means that auction bids should continue to trickle in irrespective of the movement of the ETH price.

Agreed, limits on number of auctions and amount of debt to be covered are both design options under consideration.

1 Like

In a situation in which the market price is dropping faster than the auction price, with Dutch auctions “minimal bids” aren’t a concern: you’re constrained to buy at the current price of the auction. What would happen in that situation is no one would bid at all. If the price in the auction never catches up to the market, that is a real risk as the debt won’t be covered. Mitigations like re-initialization based on the current OSM price can help to correct this, but ultimately, if a collateral is going to zero, the system is likely to sustain losses no matter what. But it is an important point that dropping the price too slowly has risks just like dropping the price too quickly.

3 Likes

What I had meant to describe by “minimal bids” is the situation that you’re describing where no bids would come in due to the auction not catching up to the market. Sorry if I wasn’t clear in my original post.

I agree that protecting against this situation is critical to maximise returns from collateral auctions. Because of this, I think that having a single auction price is risky. Intuitively, I would say that small packets of collateral should have their auction price drop quickly and larger packets should have their price drop more slowly. I believe this would ensure that no matter the movement of the market, there should always be a trickle (or steady stream) of collateral being sold.

I think that a single price for all collateral is too difficult to optimize to behave desirably in all situations.

Would the idea here be we would start the bidding at the OSM price, or would we take the OSM price and multiply it by some auction premium?

In practice you probably want some adjustable multiplicative factor (e.g. 1.15) that you multiply the OSM price by to compute the auction start price, but the point is that the input to the calculation is just the OSM (1 hour delayed) price, not the instantaneous feed price. Using the latter would significantly increase the risk from oracle attacks.
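
As a concrete (and purely illustrative) example of that calculation, with made-up numbers:

```python
# Illustrative only: compute an auction start price from the delayed OSM price
# and an adjustable premium. The values 200 and 1.15 are example numbers.
osm_price = 200.0              # DAI per ETH, as read from the OSM (1 hour delayed)
buf = 1.15                     # governance-adjustable start-price multiplier
start_price = buf * osm_price  # 230.0 DAI per ETH
```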

I suppose that under this system, once someone is liquidated, they would not have any collateral returned to their vault, as happens during the dent phase today? Basically the auction would always liquidate the whole lot.

Also who makes the profit if the collateral is sold for more than the OSM price? The protocol? The Vault owner?

2 Likes

There isn’t “a single price for all collateral”: the price for any given chunk of liquidated collateral is a function of how long ago it was liquidated and what the OSM price was at the time of liquidation. So naturally you’re going to get a range of prices. Note that unless the price decrease function is very trivial (linear with fixed slope), the rates of decrease will differ between auctions anyway.
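
As a small illustration of that last point (example numbers only): with an exponential decay, two auctions that share the same parameters but were kicked at different OSM prices drop by the same percentage per step, yet by different absolute amounts of DAI.

```python
# Illustrative: same relative decay, different absolute rates of decrease.
cut, step = 0.99, 90                       # example: a 1% drop every 90 seconds

def price(start_price, elapsed):
    return start_price * cut ** (elapsed // step)

a = price(start_price=230.0, elapsed=900)  # kicked when the OSM read 200, with a 1.15 premium
b = price(start_price=115.0, elapsed=900)  # kicked later at half that starting price
# After ten steps both have fallen about 9.6%, but auction `a` has shed roughly
# twice as many DAI per step as auction `b`.
```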

I don’t see any argument for why the rate of decrease should depend on the amount of collateral in a particular auction in a world with partial bids. Why should 100 auctions of 10 ETH (started simultaneously) behave differently from 1 auction of 1000 ETH? A fast-dropping market price is certainly a scenario requiring careful thought and probably some mitigation strategies. A variant of what you suggest that makes more sense to me would be to split the collateral into tranches that change in price at different rates when a liquidation occurs. This way the proportion of collateral in each of the “price decrease rate” buckets can be controlled in some reasonable way and is not randomly determined by the sizes of the Vaults that are liquidated. I’m unsure if that would give any extra robustness, and even if it did, whether it would be worth the increased complexity.
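
For what it is worth, here is a toy sketch of that tranche idea; the split ratios and per-tranche decay factors are made-up example values, not a proposed parameterization.

```python
# Toy sketch: split one liquidated lot into tranches that decay at different
# rates. The split ratios and decay factors below are example values only.

def kick_in_tranches(lot, tranches=((0.5, 0.99), (0.3, 0.995), (0.2, 0.999))):
    """Return (collateral_amount, per-step decay factor) pairs for one lot."""
    return [(lot * share, cut) for share, cut in tranches]

print(kick_in_tranches(1000.0))  # 1000 ETH -> three buckets dropping at different speeds
```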