A Liquidation System Redesign: A Pre-MIP Discussion

Something like Uniswap v2’s on-chain price oracle solution? I believe it allows you to get the average price over a timescale of your choice (not entirely sure how it works.) Something like taking the price average over the last week or month, and then adding a fixed percentage is my first thought for initial price determination.
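For reference, the idea behind Uniswap v2's oracle is that each pair accumulates `price * seconds_elapsed`, so anyone can compute a time-weighted average price (TWAP) from two cumulative observations. A minimal sketch of the arithmetic (variable names are illustrative, not the actual contract interface):

```python
def twap(cum_price_start, cum_price_end, t_start, t_end):
    """Time-weighted average price from two cumulative-price observations.

    Uniswap v2 pairs accumulate price * seconds_elapsed on every update,
    so the average price over any window is the difference of the two
    cumulative values divided by the elapsed time.
    """
    return (cum_price_end - cum_price_start) / (t_end - t_start)

# e.g. average over a chosen window, plus a fixed 10% margin:
# start_price = twap(c0, c1, t0, t1) * 1.10
```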

Rate limiting also seems like a great idea.

Is your main concern with the price oracle attack system stability, or fairness towards users? (I assume both are concerns, but still.)

2 Likes

Welcome @yaronvel!

I’m assuming that ensuring vaults’ debts are covered (avoiding bad debt) will be the top priority, but optimizing for the best possible price (and therefore the greatest possible return of collateral to liquidated vault owners) will also be an important consideration.

Vaults get one hour's notice before they can be liquidated, because the oracle security module delays price updates, so hopefully liquidations will only happen rarely. With a liquidation penalty of 13%, users should always get more collateral back by liquidating their own position (potentially using a flash loan or an automation service like defisaver) than by letting their vault go to auction.

1 Like

Had another thought about this. It seems like the greatest risk with the dutch auction system is an oracle attack that feeds in an unreasonably low price - causes lots of liquidations and the starting price could potentially be far below market. Maybe this risk could be mitigated by limiting the rate of change of the OSM price itself.

E.g., revise the oracle system to only permit the next price to change by a maximum factor of 2x per hour since the last update. For any median price outside of these bounds, the OSM sets the next price to the minimum (last price * (50% ^ hours since last update)) or maximum (last price * (200% ^ hours since last update)) permitted, instead of the fed price.
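A minimal sketch of that clamping rule, assuming an hourly factor of 2x (names illustrative):

```python
def clamp_next_price(fed_price, last_price, hours_elapsed, max_hourly_factor=2.0):
    """Limit the OSM's next price to at most a max_hourly_factor move
    (up or down) per hour since the last update; out-of-bounds feeds
    are clipped to the nearest bound instead of being accepted."""
    upper = last_price * max_hourly_factor ** hours_elapsed
    lower = last_price * (1 / max_hourly_factor) ** hours_elapsed
    return min(max(fed_price, lower), upper)
```

A manipulated feed of $10 against a $100 last price would only move the next price to $50 after one hour, rather than all the way down.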

If an oracle attack feeds too low a price, a rate-of-change limit would contain the damage. A side benefit is that it would also limit oracle attacks that feed too high a price in order to purposefully generate bad debt.

The drawback is that if a collateral price fell faster than the rate limiter allows, some bad debt could accumulate. If a price ever updates limit-down, maybe that could be used to trigger an automatic drop of that asset's DC to 0.

1 Like

@monet-supply hmmm yeah a tweak to the oracle system like that could be a good idea. Depending on exactly how it’s implemented, it could have the side effect of significantly slowing liquidations during a crash, and also if the price recovers quickly, avoid some liquidations entirely. EDIT: of course, it’s not strictly necessary to implement this in the OSM itself, you can track it in the auction instead (probably by having the Spotter poke the auction contract as well).

One other thing that’s been suggested elsewhere but not mentioned in this thread is using the “liquidation price”, which is defined as the price of the collateral in a Vault after adjusting for the liquidation ratio (i.e. the maximum price at which the position could be liquidated). In the case of an oracle attack that artificially crashes the ETH price very suddenly, the liquidation price of most Vaults will be well above the OSM price and thus can be used to set a more reasonable starting value. In reality it should probably just be one of several inputs to the logic that determines the starting price.
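As a sketch of how that could work (names and the `buf` multiplier are illustrative, not the actual implementation):

```python
def liquidation_price(debt, collateral, liquidation_ratio):
    """Maximum collateral price at which the vault is eligible for liquidation."""
    return debt * liquidation_ratio / collateral

def starting_price(osm_price, debt, collateral, liquidation_ratio, buf=1.1):
    """One possible input mix: never start the auction below what the
    vault's own liquidation price implies, even if the OSM reading has
    been crashed by a manipulated feed."""
    return max(osm_price, liquidation_price(debt, collateral, liquidation_ratio)) * buf
```

For a vault with 20,000 DAI of debt against 300 ETH at a 150% liquidation ratio, the liquidation price is $100, so even an OSM feed of $5 would not drag the starting price below $110 here.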

6 Likes

To me it seems that the dilemma over how to set the initial price is really about what a reasonable/ideal time is for the auction to reach the so-called "market price".
I would imagine the rate of the price change to be the main question.

Here are a few thoughts on how to mitigate the initial price problem, though I am not sure how practical they are:

  1. Let a governance vote decide on an initial price that will always be at least 10x-100x the current market price (and vote again whenever the price changes by 10x). In addition, have a mechanism for a speedy price decrease in the initial phase, e.g., the price goes down quickly until X% of the tab has been bid.
  2. When the tab is big, first run an auction (like the current flipper) on 10% of the tab, then use the outcome to set the initial price for a Dutch auction on the remaining 90%.
  3. Run daily DAI=>GEM and GEM=>DAI (non-Dutch) auctions for a 10k DAI quantity, and use the resulting price x2 as the initial price for that day. Assuming a 10% loss in the DAI=>GEM=>DAI round trip, this requires a subsidy (DAI inflation?) of 1k DAI per ilk per day.

All of the above have attack vectors and disadvantages, and most importantly are probably heavy to implement. But they could be refined to mitigate some of the pitfalls.

Finally, a note on the OSM price. It should be noted that the OSM does not reflect the DAI/USD price, so taking 1.1x the OSM price is a bit borderline; e.g., on Black Thursday DAI briefly traded for 1.1 USDC.
That said, from a system-stability point of view, I guess 1 DAI = 1 USD is good enough.

2 Likes

One more idea about resistance against oracle attacks:

The risk here is mostly one-sided: we are worried about oracles being manipulated to show too-low prices, which triggers liquidations. Too-high prices would merely lead to auctions not being started. Let's additionally assume that the oracles were honest recently. This should hold except well into a long-running attack, which should have been reacted to by other means by then.

So keep track of the recent maximum price and use that to set the initial price. Due to taking the maximum, it can’t be manipulated downwards.

This does not need tricky governance decisions (except perhaps deciding the period to take the maximum price from) or have costs to run (apart from normal oracle costs).

A multiplier can be applied on top for the same reasons other options have multipliers.
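A sketch of the rolling-maximum idea (the window length and multiplier are the parameters mentioned above; names are illustrative):

```python
from collections import deque

class RecentMax:
    """Track the maximum oracle price over a sliding window of updates.

    Because a maximum can only be pushed up, a manipulated low feed
    cannot drag the auction starting price down within the window.
    """
    def __init__(self, window=24):
        self.prices = deque(maxlen=window)  # e.g. last 24 hourly updates

    def update(self, price):
        self.prices.append(price)

    def starting_price(self, multiplier=1.1):
        return max(self.prices) * multiplier
```

With honest recent prices of $100 followed by a manipulated feed of $40, the starting price stays anchored to the $100 maximum until that observation ages out of the window.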

2 Likes

Doesn’t the OSM already take into account many different price feeds? It is good to be cautious, but are we being overly afraid here assuming the OSM cannot be trusted? Surely such a robust attack across multiple price feeds and exchanges would be incredibly difficult to pull off? If we believe this attack vector is feasible, perhaps we should look into restructuring the OSM.

I am also in favor of limiting the amount the price can change in a single update, but maybe with the maximum change even lower than 50%. On Black Thursday we lost about 50% over the course of a day, if I remember correctly. Maybe a max change of 10-20% per update would be good. In most cases a dramatic swing like this tends to rebound anyway, so smoothing the curve will likely reduce losses for the protocol.

3 Likes

It does, but liquidations should not rely on the OSM if they don't have to. Liquidations not relying on it also makes such an attack less useful, because the payoff of pulling it off would be getting to buy collateral below market price. So not using the OSM here makes the OSM safer.

This is definitely not anything we should play with. Smoothing like this can reduce losses in ordinary conditions but catastrophically increase them in larger negative events. Exactly the opposite of what we want! Predictable small losses can be countered with fees; it is the unpredictable large losses that are the real problem, which MakerDAO needs to try to minimize.

4 Likes

I understand that we don’t want to rely on the OSM if we don’t have to, but it seems like much of the difficulties around using this dutch auction style (which in most ways is a dramatic improvement over the previous auction style) is coming from our attempts to avoid trusting the OSM. I guess my point is more that maybe we can’t really avoid trusting the OSM, or at least the most robust way of accomplishing what we want to do is by trusting the OSM and then just doubling down on the security there.

This way we don't have to choose lots of arbitrary values for where to start the auction, and make lots of weird assumptions that are likely to lead to different, harder-to-predict problems.

4 Likes

I think it’s great to see discussion about the risks of the OSM, but in my opinion rather than zooming in on the auctions, I think it is a much better direction to discuss how to improve the OSM and make it more secure, and I already see a lot of great suggestions on the topic.

On that note, people here might find it relevant to take a look at the MIP1 problem spaces that relate to the OSM, such as Decentralized Oracle Freeze and Emergency Oracles.

Focusing specifically on trying to make auctions resistant to OSM failure misses the mark, because there are already other aspects of the protocol that critically rely on, and assume, that the OSM will always report true values.

Most important are triggering liquidations and calculating emergency shutdown settlement. Both result in catastrophic failure if the OSM fails. If Maker liquidated all of its collateral simultaneously in a fire sale due to an OSM failure, it wouldn't really matter whether the auctions had a correct starting price; the outcome would not be much different from selling everything at 0, and it is obviously completely unacceptable to users no matter what. So it has to be prevented from happening in the first place by governance.

Same goes for emergency shutdown: if the OSM failed, an emergency shutdown could be manipulated so that either all Dai holders or all Vault holders would lose everything, with the other side gaining twice as much in the settlement. Again, an outcome that would be completely unacceptable if it were even remotely a possibility in the long run, and one that has to be prevented by governance for the Maker protocol to be viable.

So my point is, if we're worried about failure of the OSM, I think the solution is to try to improve the OSM and make it more robust, not to spend precious time trying to come up with a perfect auction design that's resistant to the edge case of OSM failure, while at the same time ignoring the other critical, unacceptable outcomes that would happen as a result of an OSM failure.

The smarter approach is to tackle the two issues separately to avoid getting stuck: finish the auction design first without focusing on the OSM failure edge case. Getting it done would then create space for the community to move on to new features that address the OSM issue directly, for all aspects of the system (not just auctions), such as the suggestions already posted here, or the Decentralized Oracle Freeze and Emergency Oracles mentioned above (both are MIP1 problem spaces, so if you want to learn more about them you can check out MIP1).

A final, but important point to add, is that the auction system, like everything else, can always be upgraded later with more sophisticated and complicated logic. Given the amount of work that still needs to be done to prepare Maker for self-sustainability, I don’t think that now would be the right time to try to build a perfect solution that would never need to be revisited.

So even if it is hypothetically possible to create a solution that mitigates the impact of OSM failure for auctions, and does so without compromising economic efficiency, there is still no reason to focus all energy on having it implemented from day 1. Instead it can be done in the future, when other, more pressing issues have been dealt with.

Thoughtful prioritization will be critical if this community is to be successful in holistically navigating all the risks that the protocol faces.

13 Likes

I generally agree with Rune’s point that we should focus on improving the OSM/oracles separately from implementing the new auctions, given that in a catastrophic “prices set to zero” OSM attack, the system is likely sustaining losses of the same order of magnitude whether the Dutch Auctions have good initial price discovery or just use the OSM value.

That said, it doesn’t mean we shouldn’t have any countermeasures in the new auctions themselves, because the risks of a Black Thursday-like market fluctuation are very real and quite similar to the risks of an OSM attack (i.e. a difference of degree, not kind). Of course the special features of Dutch Auctions are supposed to help lessen those risks.

I’ve been gaming different scenarios and mitigations out in my head most of today, and every mitigation seems to be a mixed bag: some are only good in certain situations but bad in others, and some introduce new vulnerabilities.

We’ll be doing economic simulations as part of the testing process for the new auctions; these can hopefully shed some light on whether a simple initial price determination will suffice.

6 Likes

Hi! I made a separate thread on Batch auctions for liquidations. It also includes our (Gnosis’) experience with the DutchX, a DEX using the dutch auction model that we built in the past.

7 Likes

Am I missing something, or do Dutch auctions imply a single flip phase (tend), meaning the liquidated CDP might not receive any collateral back (no dent phase)?

Not really. In a first phase, the amount of Dai to be raised could remain constant (equaling the debt to be covered + liquidation penalty), and the amount of collateral being given in return would increase until the point where all the collateral in the vault is up for sale. From there, the amount of Dai would be decreased (and the system would be making a loss once it has eaten through the liquidation penalty.) The price per unit collateral would in any case follow the price curve, for example a linear decline.

@wouter So just to make sure I understand.

You start the auction with X amount of shortfall (DAI) to cover (raise) where X = debtToCover + penalty. You also start with an amount of collateral to sell Y that is proportional to debtToCover.

You then continually increase the amount to sell until it hits the total amount locked in the vault. After you hit the ceiling, you decrease the amount of DAI the auction tries to cover.

All of this is done smoothly using a DAI/COL price curve. DAI/COL may start at p% of the OSM price and from there everything is adjusted by the curve.

Did I miss anything?

EDIT: In this model you can’t really guarantee only a penalty percent loss for vault users because the DAI/COL offered to bidders (and thus the loss incurred by vaults) always changes according to the curve.
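The two-phase schedule described in this exchange can be sketched as follows, using a linear price decline as in the example (names, the curve, and the parameters are illustrative):

```python
def auction_state(t, tab, vault_collateral, start_price, duration):
    """Price per unit collateral declines linearly from start_price to 0.

    Phase 1: the Dai to raise stays fixed at tab while the lot of
    collateral offered grows. Phase 2: once the lot would exceed the
    vault's whole balance, all collateral is up for sale and the Dai
    actually raised shrinks instead. Returns (price, lot, dai_raised).
    """
    price = start_price * max(0.0, 1 - t / duration)
    if price > 0 and tab / price <= vault_collateral:
        return price, tab / price, tab                       # phase 1
    return price, vault_collateral, vault_collateral * price  # phase 2
```

For a 22,600 DAI tab against 300 units of collateral starting at $110, the auction sits in phase 1 until the price falls to about $75.33 (22,600 / 300), after which every further tick reduces the Dai recovered.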

Folks, a quick note regarding some thoughts on the liquidation system that led to an idea.

A quick example that illustrates the idea:

Let's say the OSM is at $200 USD/ETH (just for easy calculation) and a new vault owner opens a vault with 300 ETH and draws 20K DAI, so their CR at inception is 300%. All fine and dandy. Now the OSM drops to $100, putting them at the liquidation point.

So now they come under the liquidation gun. The system as currently designed liquidates everything, returning any collateral that comes back from phase 2 to the vault. But let's change how this vault liquidates into a sequence of liquidations of some fixed amount of ETH, say 50 ETH at a time.

So this vault needs to recover its tab + 0.13 * tab = 1.13 * 20000 = 22600 DAI, so if we were to offer this vault's 50 ETH lot up for auction, the 50 ETH would have to fetch 3333.33 DAI (borrowed) + 0.13 * 3333.33 = 433.33 DAI profit to the system. Let's say phase 1 gets a bid that covers the full lot tab (3766.66 DAI) and phase 2 gets a bid of 40 ETH.

So what happens to the vault?

  1. DAI owed would drop: 20000 - 3333.33 = 16666.66 DAI owed.
  2. ETH backing the above DAI would become 250 + 10 = 260.
  3. The new vault liquidation price becomes 16666.66 * 1.5 / 260 = 96.15, and if the OSM never drops below 100 this vault only loses 40 ETH and remains intact.
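The arithmetic of this worked example, as a sketch (assumes the 13% penalty and a 150% liquidation ratio):

```python
# Vault: 300 ETH backing 20,000 DAI; liquidate in 50 ETH lots.
debt, collateral, penalty, liq_ratio = 20_000.0, 300.0, 0.13, 1.5

lot = 50.0
lot_debt = debt * lot / collateral    # 3333.33 DAI of debt covered per lot
lot_tab = lot_debt * (1 + penalty)    # 3766.66 DAI the lot must fetch

# Phase 1 covers the full lot tab; the phase 2 bid keeps only 40 ETH,
# so 10 ETH flow back to the vault.
returned_eth = lot - 40.0

debt_after = debt - lot_debt                         # 16,666.66 DAI
collateral_after = collateral - lot + returned_eth   # 260 ETH
liq_price_after = debt_after * liq_ratio / collateral_after  # ~96.15
```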

What the above scenario illustrates is that there may be circumstances where, if liquidations are done sequentially on a vault, ANY collateral returned from such an auction will improve the vault's CR.

There are a few ideas one can take away from this:

  1. Liquidating a vault in a sequence of smaller liquidations could save the rest of collateral in a vault.
  2. If a vault is saved from further liquidations this may stop a sequence of cascading liquidations.
  3. DAI profit from the first liquidation could be used to also temporarily buffer this vault from further liquidations. Even if we used 1/2 of the 433.33 DAI profit this would drop the Liquidation point by 1% in this case.
  4. Borrow facilities that have a higher LR and lower LF, and hence a higher chance of returning collateral in the above model of sequential liquidations, would have a much better chance of not having entire vaults liquidated.
  5. If these sequential liquidations happen over time, it might buy enough time for the vault owner to either add more collateral or pay back some DAI.

The issues with this in terms of code:

  1. Not as clean as liquidating the entire vault.
  2. Vault status when in a sequence of liquidations is more difficult to track.
  3. Liquidating sequentially as above may expose the system to more risk.

My point here is that it isn't just about fixing the auction method: an adjustment to how a vault's collateral is liquidated could also improve the system, by reducing the need to liquidate, or, put better, by using a sequence of liquidations to try to recover collateral for vault holders so their entire vaults don't need to be liquidated. In principle the system could also use the liquidation fee DAI (at least half of it) as an additional temporary collateralization buffer for the vault, to buy time or collateralization and possibly stop additional collateral from being liquidated.

This becomes particularly true if market prices are recovering.

Anyway. I know implementing the above is probably a 'big deal' from a smart contract design perspective, but it was something I wanted to point out when I started looking at this from the angle of how we can improve the liquidation system generally.

One last thing that came up here is the idea of having an operational flag for when auctions DO NOT return any collateral to a vault, as this kind of event shows that either the markets are really stressed price-wise or the auction system is not performing properly. It is a reason to throttle liquidations somewhat, particularly if they are not recovering:

  1. Any vault collateral
  2. The 13% liquidation fee
  3. The system losing money because auctions are not even fetching the 100% borrow tab.

1 Like

If the auction doesn't recover any collateral, it typically means the ETH price is falling very fast (faster than the auction duration). If you throttle liquidations for this reason, it just means MKR holders absorb bigger losses (assuming the ETH price does not bounce back quickly). Vault holders need to understand there is no guarantee that they will receive collateral back in the event of a liquidation.

I see here that liquidation system 2.0 is planned, but there is no link to the new system discussion. Is the system described in this post the 2.0?

Yes, this one is the 2.0 discussion.