Gauntlet's Auction Assessment

At Gauntlet, we use agent-based simulations to evaluate the economic security of blockchain protocols. We have worked with the Maker team on parameter optimization for Liquidations 2.0, an upgrade from the current Liquidations 1.2 system. The goal of the simulation was to optimize the Liquidations 2.0 parameters, which define the Dutch auction pricing curve. We then compared the optimized Liquidations 2.0 system's performance to that of the current Liquidations 1.2 system. The key metrics tested are auction throughput, risk, and slippage. A key area of focus was ensuring that a Black Thursday-style insolvency situation does not occur again, so we tested both systems across a wide range of volatilities and simulated agent behavior over a 3-day period.

The Liquidations 2.0 analysis found the optimal parameter values:

buf = 110%
cusp <= 60%
tail >= 5200 seconds
step = 30 seconds
half life = 7200 seconds
cut = 99.7%
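As a rough sketch of what these parameters mean, here is an illustrative stairstep-exponential price curve (assuming an abacus in the style of Maker's StairstepExponentialDecrease, where the auction price is multiplied by cut once every step seconds; the function name and the oracle price below are hypothetical):

```python
def auction_price(top: float, dur: int, step: int = 30, cut: float = 0.997) -> float:
    """Dutch auction price `dur` seconds after kick.

    top: the starting price, i.e. buf (110%) times the oracle price at kick.
    The price is multiplied by `cut` once every `step` seconds
    (a stairstep exponential decrease).
    """
    return top * cut ** (dur // step)

# Example with a hypothetical oracle price of 100:
top = 1.10 * 100.0           # buf = 110%
p0 = auction_price(top, 0)   # 110.0 at kick
p1 = auction_price(top, 30)  # one step later: 110 * 0.997
# Note: cut = 99.7% per 30 s step implies a half-life of roughly
# log(0.5) / log(0.997) ~ 231 steps ~ 6900 s, close to the stated 7200 s.
```

The step and cut parameters jointly determine the half-life, so the values above are mutually consistent up to rounding.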

Additionally, the analysis shows that Liquidations 2.0 outperformed Liquidations 1.2 on every unambiguous metric of throughput and insolvency risk, especially during periods as volatile as Black Thursday.

We hope that this report can serve as a reference for the ongoing discussions about risk on Maker and the upcoming Liquidations 2.0 release.


Wow, this is exceptionally useful information. Would love to see more of Gauntlet in DeFi.


Thanks Gauntlet team. We made a proposal for LINK-A here and also referred to some of your findings.

Your proposal doesn’t include other key parameters needed in Liquidations 2.0: ilk.hole, hole, chip, tip, and ilk.chop. Some of these (particularly ilk.hole) are related to the “core” parameters you suggested.

We believe the buf you proposed may be too low. As far as I can see, your price curve implies an expected auction duration of around 17 minutes, which may be too fast. In some cases, especially for volatile assets such as LINK, a 1.10 buf could also lead to an auction starting price below the market price. But the real concern is that, under Black Thursday conditions, this curve could let some auctions settle within the first few minutes, and we may want to avoid that because auction throughput would simply be too high, especially if we again end up in a scenario where network congestion leads to late, below-market bids.
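For reference, the roughly 17-minute figure can be sanity-checked with a back-of-the-envelope calculation (assuming a stairstep curve that multiplies the auction price by cut once every step seconds; the variable names below are illustrative):

```python
import math

# How long until the price decays from buf * oracle back down to the
# oracle (market) price, given the proposed buf, cut, and step values?
buf, cut, step = 1.10, 0.997, 30
steps_to_market = math.ceil(math.log(1 / buf) / math.log(cut))
seconds_to_market = steps_to_market * step
print(steps_to_market, seconds_to_market)  # 32 steps, 960 s (~16 minutes)
```

This lands at roughly 16 minutes, consistent with the ~17-minute expected duration mentioned above.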

We also think the curve could be a bit less smooth than the one suggested here. If people bid through UIs in these auctions, a price change every 30 seconds is probably too fast. I am aware that I initially recommended using a smooth rather than a discrete curve in our early discussions, but perhaps it shouldn’t be too smooth, for the reasons mentioned.


We hope this serves as a helpful reference for the community. We are very much not trying to model everything; we prioritized the parameters that had the largest effect on key outcomes. We provide suggestions to be helpful, but the goal of the report is to let people see the tradeoffs involved. It’s up to the community to weigh those tradeoffs and do what’s best for Maker.


Did you take into consideration that the next auction may kick off while the first auction is still in progress?

I am also quite surprised that the step is around 2 blocks’ time. It seems quite small to me.

Definitely - you can flip through the individual simulation runs here: Gauntlet - MakerDAO Auction Report

Scrolling down a bit, on the right-hand side you can see this:


As you change the sliders above, you can see how the number of simultaneous auctions varies over time under different conditions (volatilities, etc.), as well as a slew of other metrics.