A Liquidation System Redesign: A Pre-MIP Discussion

It does, but liquidations should not rely on OSM if they don’t have to. Liquidations not relying on it also makes such an attack less useful, because the payoff of pulling it off would be in getting to buy collateral below market price. So not using OSM here makes OSM safer.

This is definitely not anything we should play with. Smoothing like this can often reduce small losses while catastrophically increasing them in larger negative events. Exactly the opposite of what we want! Predictable small losses can be countered with fees; it is the unpredictable large losses that are the real problem, the one MakerDAO needs to try to minimize.


I understand that we don’t want to rely on the OSM if we don’t have to, but it seems like many of the difficulties around using this Dutch auction style (which in most ways is a dramatic improvement over the previous auction style) are coming from our attempts to avoid trusting the OSM. I guess my point is that maybe we can’t really avoid trusting the OSM, or at least that the most robust way of accomplishing what we want is to trust the OSM and then double down on security there.

This way we don’t have to choose lots of arbitrary values for where to start the auction, or make lots of weird assumptions that are likely to lead to different, harder-to-predict problems.


I think it’s great to see discussion about the risks of the OSM, but in my opinion, rather than zooming in on the auctions, a much better direction is to discuss how to improve the OSM and make it more secure, and I already see a lot of great suggestions on the topic.

On that note, people here might find it relevant to take a look at the MIP1 problem spaces that relate to the OSM, such as Decentralized Oracle Freeze and Emergency Oracles.

Focusing specifically on trying to make auctions resistant to OSM failure misses the mark, because other aspects of the protocol already critically rely on the assumption that the OSM will always report true values.

The most important are triggering liquidations and calculating emergency shutdown settlement. Both result in catastrophic failure if the OSM fails. If Maker liquidated all of its collateral simultaneously in a fire sale due to an OSM failure, it wouldn’t really matter whether the auctions had a correct starting price; the outcome still wouldn’t be much different from selling everything at 0, and obviously that is completely unacceptable to users no matter what. So it has to be prevented from happening in the first place by governance.

Same goes for emergency shutdown: if the OSM failed, an emergency shutdown could be manipulated so that either all Dai holders or all Vault holders would lose everything while the other side gained twice as much in the settlement. Again, an outcome that would be completely unacceptable if it were even remotely a possibility in the long run, and one that has to be prevented by governance for the Maker protocol to be viable.

So my point is: if we’re worried about failure of the OSM, I think the solution is to improve the OSM and make it more robust, not to spend precious time trying to come up with a perfect auction design that’s resistant to the edge case of OSM failure, while at the same time ignoring the other critical, unacceptable outcomes that would result from an OSM failure.

The smarter approach is to tackle the two issues separately to avoid getting stuck, and finish the auction design first without focusing on the OSM failure edge case. Getting it done would then create space for the community to move on to new features that address the OSM issue directly, and for all aspects of the system, not just auctions: the suggestions already posted here, for example, or things such as the Decentralized Oracle Freeze and Emergency Oracles mentioned above (both are MIP1 problem spaces, so if you want to learn more about them you can check out MIP1).

A final, but important point to add, is that the auction system, like everything else, can always be upgraded later with more sophisticated and complicated logic. Given the amount of work that still needs to be done to prepare Maker for self-sustainability, I don’t think that now would be the right time to try to build a perfect solution that would never need to be revisited.

So even if it hypothetically is possible to create a solution that mitigates impact of OSM failure for auctions, and does so without compromising economic efficiency, there is still no reason to focus all energy on having that implemented from day 1. Instead it can be done in the future when other, more pressing issues are dealt with.

Thoughtful prioritization will be critical if this community is to be successful in holistically navigating all the risks that the protocol faces.


I generally agree with Rune’s point that we should focus on improving the OSM/oracles separately from implementing the new auctions, given that in a catastrophic “prices set to zero” OSM attack, the system is likely sustaining losses of the same order of magnitude whether the Dutch Auctions have good initial price discovery or just use the OSM value.

That said, it doesn’t mean we shouldn’t have any countermeasures in the new auctions themselves, because the risks of a Black Thursday-like market fluctuation are very real and quite similar to the risks of an OSM attack (i.e. a difference of degree, not kind). Of course the special features of Dutch Auctions are supposed to help lessen those risks.

I’ve been gaming different scenarios and mitigations out in my head most of today, and every mitigation seems to be a mixed bag–some are only good in certain situations but bad in others, and some introduce new vulnerabilities.

We’ll be doing economic simulations as part of the testing process for the new auctions; these can hopefully shed some light on whether a simple initial price determination will suffice.


Hi! I made a separate thread on batch auctions for liquidations. It also includes our (Gnosis’) experience with the DutchX, a DEX using the Dutch auction model that we built in the past.


Am I missing something or do Dutch auctions imply a single flip phase (tend) and thus the liquidated CDP might not receive any collateral back (lack of dent)?

Not really. In a first phase, the amount of Dai to be raised could remain constant (equaling the debt to be covered + liquidation penalty), and the amount of collateral being given in return would increase until the point where all the collateral in the vault is up for sale. From there, the amount of Dai would be decreased (and the system would be making a loss once it has eaten through the liquidation penalty.) The price per unit collateral would in any case follow the price curve, for example a linear decline.
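The two-phase mechanics described above can be sketched in a few lines of Python. This is an illustrative model under stated assumptions (a linear price decline from a chosen starting price), not Maker’s actual implementation; all names and numbers are hypothetical.

```python
# Sketch of a two-phase Dutch auction quote, assuming a linear price decline.
# Phase 1: DAI to raise is fixed at the tab, collateral offered grows.
# Phase 2: all collateral is up for sale, DAI required shrinks.

def dutch_quote(tab, total_collateral, start_price, elapsed, duration):
    """Return (dai_required, collateral_offered) at a point on the curve.

    tab               -- debt to cover plus liquidation penalty, in DAI
    total_collateral  -- collateral locked in the vault
    start_price       -- initial DAI-per-collateral price (e.g. p% of OSM)
    elapsed, duration -- auction timing; price declines linearly to zero
    """
    price = start_price * max(0.0, 1.0 - elapsed / duration)
    if price > 0 and tab / price <= total_collateral:
        # Phase 1: the full tab can still be raised from part of the vault.
        return tab, tab / price
    # Phase 2: even the whole vault no longer covers the tab at this price.
    return price * total_collateral, total_collateral

# Early on, only part of the vault is offered for the full tab; late in the
# auction, the whole vault is offered for less than the tab.
print(dutch_quote(tab=22600, total_collateral=300, start_price=110,
                  elapsed=600, duration=3600))
print(dutch_quote(tab=22600, total_collateral=300, start_price=110,
                  elapsed=3500, duration=3600))
```

In either phase the price per unit of collateral follows the same declining curve; only which quantity is held fixed changes.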

@wouter So just to make sure I understand.

You start the auction with X amount of shortfall (DAI) to cover (raise) where X = debtToCover + penalty. You also start with an amount of collateral to sell Y that is proportional to debtToCover.

You then constantly increase the amount to sell until it hits the total amount locked in the vault. After you hit the ceiling, you decrease the DAI amount to cover with the auction.

All of this is done smoothly using a DAI/COL price curve. DAI/COL may start at p% of the OSM price and from there everything is adjusted by the curve.

Did I miss anything?

EDIT: In this model you can’t really guarantee only a penalty percent loss for vault users because the DAI/COL offered to bidders (and thus the loss incurred by vaults) always changes according to the curve.

Folks, a quick note regarding some thoughts on the liquidation system that led to an idea.

A quick example that illustrates the idea:

Let’s say the OSM is at $200 USD/ETH (just for easy calculation) and a new vault owner opens a vault with 300 ETH and draws 20K DAI. So their CR at inception is 300%. All fine and dandy. Now the OSM drops to $100, putting them at the liquidation point.

So now they come under the liquidation gun. The system as currently designed liquidates everything, returning any collateral that comes back from phase 2 to the vault. But let’s change how this vault liquidates into a sequence of liquidations of some fixed amount of ETH, say 50 ETH at a time.

So this vault needs to recover its tab + 0.13 × tab = 1.13 × 20000 = 22600 DAI, so if we were to offer this vault’s 50 ETH lot up for auction, the 50 ETH would have to fetch 3333.33 DAI (borrowed) + 0.13 × 3333.33 = 433.33 DAI profit to the system. Let’s say phase 1 gets a bid that covers the full tab (3766.66 DAI) and phase 2 gets a bid of 40 ETH.

So what happens to the vault?

  1. DAI owed would drop from 20000 to 20000 − 3333.33 = 16666.66 DAI.
  2. ETH backing the above DAI would become 250 + 10 = 260.
  3. The new vault liquidation price becomes 16666.66 × 1.5 / 260 = 96.15, and if the OSM never drops below 100, this vault only loses 40 ETH and remains intact.

What the above scenario illustrates is that there may be circumstances where, if liquidations are done sequentially on a vault, ANY collateral returned from such an auction will improve the vault’s CR.
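The arithmetic from the example above can be checked directly. The numbers and variable names are just the illustrative ones from the scenario, not protocol code:

```python
# A 20,000 DAI vault backed by 300 ETH, liquidated one 50 ETH lot at a time.
debt, collateral = 20_000.0, 300.0
penalty_rate, liq_ratio = 0.13, 1.5   # 13% penalty, 150% liquidation ratio
lot = 50.0

# Each lot covers a proportional share of the debt, plus the penalty.
debt_share = debt * lot / collateral            # 3333.33 DAI
tab = debt_share * (1 + penalty_rate)           # 3766.66 DAI to raise

# In the scenario, phase 1 covers the full tab and phase 2 bids 40 ETH,
# so 10 of the 50 ETH come back to the vault.
returned = 10.0

new_debt = debt - debt_share                    # 16666.66 DAI
new_collateral = collateral - lot + returned    # 260 ETH
liq_price = new_debt * liq_ratio / new_collateral

print(round(liq_price, 2))   # 96.15 -- now below the $100 OSM price
```

Because the returned collateral raises the vault’s CR, the new liquidation price lands below the OSM price of 100 and the remaining 260 ETH is safe unless the price falls further.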

There are a few ideas one can take away from this:

  1. Liquidating a vault in a sequence of smaller liquidations could save the rest of collateral in a vault.
  2. If a vault is saved from further liquidations this may stop a sequence of cascading liquidations.
  3. DAI profit from the first liquidation could also be used to temporarily buffer this vault from further liquidations. Even if we used half of the 433.33 DAI profit, this would drop the liquidation point by roughly 1% in this case.
  4. Borrow facilities that have a higher LR and lower LF, and hence a higher chance of returning collateral in the above model of sequential liquidations, would actually have a much better chance of not having entire vaults liquidated.
  5. If these sequential liquidations happen over time it might buy enough time for a vault owner to either add more collateral, or add more DAI.
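The fee-buffer idea in point 3 can be sanity-checked with the same illustrative numbers (hypothetical, not protocol code): applying half of the 433.33 DAI penalty profit against the remaining debt lowers the liquidation price a bit further.

```python
# Continuing the example: 16666.66 DAI debt, 260 ETH, 150% liquidation ratio.
new_debt, new_collateral, liq_ratio = 16_666.66, 260.0, 1.5
fee_profit = 433.33

# Use half the liquidation-fee profit as a temporary debt buffer.
buffered_debt = new_debt - fee_profit / 2       # 16450.00 DAI
liq_price = buffered_debt * liq_ratio / new_collateral

print(round(liq_price, 2))   # roughly 94.9, about 1% below the 96.15 above
```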

The issues with this in terms of code.

  1. Not as clean as liquidating the entire vault.
  2. Vault status when in a sequence of liquidations is more difficult to track.
  3. Liquidating sequentially as above may expose the system to more risk.

My point here is that it isn’t just about fixing the auction method: adjusting how a vault’s collateral is liquidated could actually improve the system by reducing the need to liquidate, or, put better, by using a sequence of liquidations to try to recover collateral for vault holders so their entire vaults don’t need to be liquidated. In principle the system could also use the liquidation fee DAI (at least half of it) as an additional temporary collateralization buffer for the vault, to buy time or collateralization and possibly stop additional collateral from being liquidated.

This becomes particularly true if market prices are recovering.

Anyway. I know doing anything to implement the above is probably a ‘big deal’ from a smart contract design perspective, but it was something I wanted to point out when I started looking at this from a “how can we improve the liquidation system generally” angle.

One last thing that came up here is the idea of having an operational flag for when auctions DO NOT return any collateral to a vault, as this kind of event shows that either the markets are really stressed price-wise or the auction system is not performing properly. It is a reason to throttle liquidations somewhat, particularly if they are not recovering:

  1. Any vault collateral
  2. The 13% liquidation fee
  3. The system losing money because auctions are not even fetching the 100% borrow tab.

If the auction doesn’t recover any collateral, it typically means the ETH price is falling very fast (faster than the auction duration). If you throttle liquidations for this reason, it just means MKR holders absorb bigger losses (assuming the ETH price does not bounce back quickly). Vault holders need to understand there is no guarantee that they will receive collateral back in the event of a liquidation.

I see here that liquidation system 2.0 is planned, but there is no link to the new system discussion. Is the system described in this post the 2.0?

Yes, this one is the 2.0 discussion.

This blog post includes an interview with two DeFi players who describe why they are not currently acting as keepers on lending platforms, and their thoughts on liquidation processes in general. (Full disclosure: it is also self-promotion for my project, but I think this forum could benefit from having the point of view of potential keepers in writing.)


What do you use to run these simulations, Gauntlet? I only ask because I’m currently working on a simulation platform for the Maker Protocol and want to know how we can be of any help!


Yes, it seems that Gauntlet is doing the simulations:


Mm, I see, thanks for finding this post for me! I’ll check out the video linked in there.
Would you happen to know: Gauntlet’s sims aren’t open-source, right?
I doubt our platform will be (anywhere near) as robust as Gauntlet’s, but we’re hoping it’ll at least serve as an interesting open-source tool for the community to experiment with!


Given that Gauntlet is a for-profit company, I expect not. But they might e.g. make the model available in a SaaS format so that it can be run with different inputs.

My 2 cents is that having multiple tools is good: Gauntlet will give great depth on simulation realism (e.g. being able to address even Ethereum network congestion), but something less detailed, easier and quicker to experiment with, and open-source could still add a lot of value.

Btw there’s also a high-level model of the system written in K:
