- Happy Thursday, everybody. Today we will be going over the weekly MIPs update, as we usually do on this call. My name is David Utrobin. I am a community development lead at the Foundation and resident MIP editor in training.
- I created a little slide for visualization because visualizations often help people place themselves and understand things a little bit better. We are here. Today is January 21st. The governance poll just ended, and we’re coming up toward the end of week three in our January governance cycle. Next week, on Monday, we’re going to have a bundled executive vote.
- A note on MIP42: It is a bit early since several things are in the air, such as the Keg or the governance contract redesign. There are many dependencies there, but the idea is a good one to formally move into RFC.
- I would like to touch on the difference between Conception and RFC for MIPs. Whenever a MIP is initially posted, it's in Conception. After MIP editors reach out to the author to fix basic formatting and make sure nothing's going too crazy there, it gets moved to RFC. When it moves to RFC, a pull request is also submitted to the MIPs repository.
- Chris: 6S probably won’t make it in on Friday. However, we’re likely to get it to Kovan on Friday or next Monday.
- David: Great, thanks for the update.
- David: Likewise, if anybody watching or listening would like to check any given collateral's status and see the complete list, the collateral status index Excel sheet is actively updated by several people involved in collateral onboarding at MakerDAO. That sheet has everything; it links to all the collateral applications and assessments that have been done, and it records every collateral application's current status. It's a great resource.
Other Presentations and Updates
- Many of you who have been around for a year or so perhaps remember when we had a demonstration of the general risk model we are still using to calculate risk premiums. I think we had one presentation in April last year, and then I gave one in October on risk premiums for the ETH vault. Andy has been kind enough to improve the model; he added a correlation matrix, which allows measuring Value at Risk at the portfolio level. This is an essential metric because Maker's risk exposure has been growing exponentially within the past few weeks: many new collaterals have been added in recent months, and we have finally increased the surplus buffer. This kind of presentation should give us some insight into how well-protected Maker truly is. It should help us understand this risk profile and begin thinking about risk mitigation decisions, because debt exposure can continue to grow and we may want to act sooner rather than later. I'll let Andy present his work, and then we'll have some discussion at the end of his presentation.
- My name is Andy. You’ve probably seen me in the forums. I’ve been around for a long time. I’ve been helping the risk team for the past month or two to refine the old collateral risk model and run simulations for them.
- I won’t read this full disclaimer here, but this is a holdover from previous presentations that the risk team has done. More or less, this says this isn’t financial advice, and you should consult your own advisers on those matters. I’ll post this slide after this is done.
- I'll give a brief nod to the risk team's previous work and some improvements made to the model as part of this project. That's the first half. We'll look at simulation results during the second half.
- We need to talk about Value at Risk (VaR). You may hear me use this term. There is a nice little definition that comes from Wikipedia: VaR estimates how much a set of investments may lose over a given period. You have a bell-curve type shape, and this red line represents the VaR at 5% on this particular bell curve. The area shaded in red is 5% of the total area underneath this curve; the specific number, 8.2, is not important. This is just to give you a sense of what we're talking about here.
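Numerically, the 5% VaR described above is just the 5th percentile of a loss distribution. The following is a toy illustration with a made-up distribution and parameters, not the risk team's model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy daily P&L distribution: mean 0, 2% daily volatility (illustrative only).
pnl = rng.normal(loc=0.0, scale=0.02, size=100_000)

# VaR at 5%: the loss threshold that daily P&L falls below 5% of the time,
# i.e. the red-shaded 5% tail area in the bell curve on the slide.
var_5 = -np.percentile(pnl, 5)
```

For a normal distribution with 2% volatility, this lands near 1.645 × 0.02 ≈ 3.3%.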
- The nominal risk at Maker is growing. Today the max theoretical debt ceiling is approximately 2.2 billion. You'll sometimes see comments in the media making statements such as "there's this much DAI at risk in the Maker portfolio." The VaR metric lets you make statements like the one I'm half quoting here: "we are reasonably confident that Maker losses will not exceed x in a given year, depending on your confidence level." You can potentially use that sort of information to help inform your surplus buffer decisions, rates decisions, and more. It gives you an easy-to-consume format for talking about how risk is growing within the portfolio. Hence, as you raise the rates, you can point to this metric and say, look, this number is going up.
- Here’s a link to the collateral risk model. Here’s the source code. I highly recommend these two previous presentations.
- Last year, the risk team built a model that runs a Monte Carlo simulation. It helps estimate things such as the expected loss, which informs stability fees, and the evaluated risk, which predicts the worst-case losses that a vault type can generate.
- That model allows you to simulate a hypothetical vault type, but it has some issues revolving around the practicality of its use. There are a bunch of inputs you need to come up with to actually run a simulation, and a few of them in particular are quite complicated to derive. Over the past month, we've written some software to derive these inputs quickly for a given collateral type. Even then, the model does not tell you the evaluated risk across the whole portfolio; you can't merely sum up the per-vault numbers. Hence, the model needed to be improved a bit further so we can run these simulations and determine the portfolio's overall evaluated risk.
- Slippage is one of those inputs that are difficult to evaluate. In layman's terms, slippage is what we stand to lose at auction once somebody defaults on a position. Basically, it's a function of keepers' profit expectations plus the price impact of liquidating things. I have a link here to the YFI slippage analysis, which is also where I got this graph. I highly recommend going in there and looking at it. There are five or six pages worth of dense math that the risk team put together to create these four lines; I'm dumbfounded every time I look at it. I wrote a program that can do the calculation performed in that spreadsheet.
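As a rough sketch of the idea only (not the risk team's actual model, whose slippage curves are fitted from market data), slippage can be thought of as keeper profit expectations plus a price-impact term that grows with the amount liquidated relative to market depth. All function names and parameter values below are hypothetical:

```python
def expected_slippage(liquidated_value: float, daily_volume: float,
                      keeper_margin: float = 0.03,
                      impact_coeff: float = 0.5) -> float:
    """Toy slippage estimate: keepers' expected profit margin plus a
    linear price-impact term. Parameters are illustrative placeholders,
    not the risk team's fitted values."""
    price_impact = impact_coeff * (liquidated_value / daily_volume)
    return keeper_margin + price_impact

# E.g. liquidating 10M against 500M of daily volume under these assumptions:
slip = expected_slippage(10e6, 500e6)  # 0.03 + 0.5 * 0.02 = 0.04 (4%)
```

The key qualitative point survives even in this sketch: slippage rises as the liquidated amount grows relative to available liquidity, which is why large debt ceilings push the curve up.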
- Using this, we could determine the slippage for each collateral type. Having those slippage values allows us to look at how vaults are being collateralized. We can simulate almost every vault type and develop an evaluated risk number for each one, but that's not enough to understand the portfolio. We need to understand how the losses will be correlated with each other. What we assume now is that losses are correlated in the same way as the underlying price movements. As I've noted here, this is probably not a perfect assumption. Still, it is interesting in terms of risk mitigation because it says that the losses will be highly correlated.
- Here's some matrix math for you. There will be some links in the notes on how you can derive this yourself. Using that correlation matrix and the vector of your vault type losses, you can do this matrix multiplication to determine the evaluated risk of the entire portfolio, which we'll go over in just a minute.
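That matrix multiplication can be sketched as follows. The loss vector and correlation matrix below are made-up illustrative values; the real model derives them from the per-vault simulations and price-correlation data:

```python
import numpy as np

# Per-vault-type evaluated risk figures (illustrative values, in DAI).
v = np.array([50e6, 25e6, 10e6])

# Assumed correlation matrix of losses between vault types; high
# off-diagonal values reflect the assumption that losses move together
# with price movements.
C = np.array([
    [1.00, 0.90, 0.80],
    [0.90, 1.00, 0.85],
    [0.80, 0.85, 1.00],
])

# Portfolio-level evaluated risk: sqrt(v' C v). With correlations below 1,
# this comes out below the naive sum of the per-vault numbers.
portfolio_risk = float(np.sqrt(v @ C @ v))
```

This is the standard variance-aggregation form; the gap between `portfolio_risk` and `v.sum()` is the diversification benefit, which shrinks as correlations approach 1.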
- With all that said, here are some figures.
- This is an output of the model's slippage calculations. The one takeaway I would point out on this slide: there have been many calls in the forum to raise the ETH debt ceiling, but according to our model, we're now getting into the range where the slippage for ETH is relatively high. This is why you see pushback from the risk team on continuing to raise the debt ceiling there.
- The evaluated risk numbers are essential. They are based on the past 90 days of trade data, and they currently skew favorable because a bull run has been happening during that period. However, a few vault types are not included in these numbers, which I have written here in gray: the stablecoins, the Uniswap liquidity tokens, wBTC, and the PSM. The reason is that there's some debate on how to calculate slippage for them, so I thought it best to leave them out. The included numbers are sufficiently high that I don't feel it's necessary to dwell on the excluded ones.
- Here are a few arbitrary confidence thresholds I picked. For each confidence level n shown here, the model predicts a 1-n probability of single-day losses exceeding the number in the right column. For example, the model predicts a 20% chance of a single-day loss exceeding 86.4 million within the next one-year period.
- One thing I want to point out that is relevant to current discussions on the forum is ETH-A versus ETH-B. These two have the highest exposure among the vault types that were simulated, and they account for almost all of the evaluated risk. You'll notice in these forum threads that various risk team members are commenting, "I don't feel comfortable raising these things." You can see how ETH-B contributes more risk in proportion to its exposure size. It's probably not that surprising, but you add more risk exposure every time you increase the ETH-B debt ceiling compared to raising the ETH-A debt ceiling.
- I've linked the website here where you can find this quote. There's a term called a VaR breach: a statement like "we calculated a 20% chance that the loss will exceed x" does not state how large that loss will be if it does exceed x. Say the model calculates a 1% chance of the loss exceeding 1 million; there could also be a 0.5% chance that the loss is 100 million or more, and there would be nothing wrong with both statements being true. So be a little bit careful with these numbers. They're often misinterpreted: they don't speak to the nominal size of any potential VaR breach, only to the likelihood that one would occur.
- I’m ready for questions, but here are some potential starters on what you may do with this information, as well as some further extensions we could make to the model. I’ll just take a break right here and see if anyone wants to chime in and ask questions.
- Chris: Yeah, with those VaR values, is that a 20% chance that we'd lose 80 million over a year, or are we thinking of a single shock event?
- Andy: It would be a single-day loss.
- Chris: Jesus.
- Andy: Last year, during March, a single-day event created 8 million worth of bad debt. It's that kind of event.
- Chris: Yeah, that’s very different from my intuition, so thanks for illuminating that.
- Seb: Does your model take into account a box of 15 million?
- Andy: That is a good question. It does not consider the box. It is totally okay with liquidating infinite amounts of money in a given day.
- Seb: Okay. Would you say the situation could worsen if the box is 15 million and we can only liquidate 15 million within six hours?
- Primoz: With a box of 15 million, it really depends on the price trajectory. It usually protects from worse events like zero bids. If we adjust for the box, we will get more optimistic numbers. On the other hand, the model does not take into account people unwinding before getting liquidated. We can't really model for every little thing driven by human behavior. On average, the slippage curve, which is the most essential input, is still realistic. This is where the losses come from in the end.
- Gregory Di Prisco: Did you say that you're using daily VaR?
- Andy: It is the representation of the worst day. If you go back, these numbers are adjusted to a yearly horizon. This doesn't say that every five days you should see a loss of 86 million; it's more like once every five years you should see something along those lines.
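Andy's conversion here is just the reciprocal of the annual probability: a 20% chance per year of a breach corresponds to an expected recurrence of roughly once every five years. As a quick check:

```python
# If the model predicts a 20% chance per year of a VaR breach, the expected
# recurrence interval is the reciprocal of that annual probability.
annual_breach_prob = 0.20
expected_recurrence_years = 1 / annual_breach_prob  # 5.0 years
```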
- Gregory Di Prisco: Okay, for example, oracles update once per hour. Theoretically, we have hourly exposure instead of daily exposure. Is it accounting for this?
- Primoz: No, it's simplifying it. What the model does is calculate losses on each day across thousands of simulated one-year periods: there's a one-year period in which a crash occurs at a random point, repeated over thousands of simulations. From that, you get a distribution of losses. The average of those losses is the expected loss, which is the risk premium; we use that when setting rates. When you want to look at worst-case events, you calculate the evaluated risk instead. This is how banks do it as well; they calculate evaluated risk because they want to be hedged against worst-case events. The number we showed is the first percentile of the worst losses we simulated.
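Primoz's description can be sketched as a toy Monte Carlo: simulate many one-year price paths, record the losses on each, take the mean for the expected loss (risk premium) and a tail percentile for the evaluated risk. The dynamics, cushion, and parameters below are purely illustrative, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sims, n_days = 5_000, 365
exposure = 2.2e9   # roughly the max theoretical debt ceiling cited earlier
cushion = 0.10     # hypothetical: assume a 10% collateral cushion absorbs small drops

# Simulate daily returns for each one-year path (i.i.d. normal, toy dynamics).
daily_returns = rng.normal(0.0, 0.04, size=(n_sims, n_days))

# Bad debt arises only when a single-day drop exceeds the cushion.
daily_losses = np.clip(-daily_returns - cushion, 0.0, None) * exposure
worst_daily_loss = daily_losses.max(axis=1)   # worst day in each simulated year

expected_loss = worst_daily_loss.mean()       # informs the risk premium
var_99 = np.percentile(worst_daily_loss, 99)  # tail of the worst simulated losses
```

A real implementation would replace the i.i.d. normal returns with fitted price dynamics and run the slippage curve against each liquidation, but the expected-loss versus tail-percentile distinction works the same way.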
- Gregory Di Prisco: Since we're using daily, isn't it safe to say we're being pretty conservative, considering our liquidation exposure is hourly?
- Primoz: Yes and no. Even if it's hourly, in one day you can have many liquidation events every hour. There's also a box, and not every auction gets cleared. There's slippage daily for the whole portfolio, but I get your point. We could make an alternative simulation that focuses on a shorter time interval. It could simulate the price trajectory during the day and micro-estimate that loss, which can then be extrapolated into a yearly fee. It's doable, but it's not how the model currently works.
- Gregory Di Prisco: I always worry about an OSM attack where you can see a vault go under 100% collateralization. As a rule of thumb, I take the worst hourly candle from March, overlay that onto the vault, and assume that will happen.
- Primoz: I just want to jump in here. I wanted to have this discussion today to think about how we could protect Maker and what tools are available, some of which I've discussed. There's the surplus buffer, which we may need to think about increasing even further; we also need to accept that it takes time to get it to a ten million figure. It could take two or three months from now to get to ten million, which may already be too late. The second thing is the box parameter we had discussed. It's difficult to justify a higher number, but I think it could be very beneficial. Liquidations 2.0 would improve the slippage functions, and the numbers would be much lower; this is why Liquidations 2.0 is currently a priority from the development perspective. Regarding higher liquidation ratio vaults, we will need to choose between business growth and risk management. As I said, those figures are based on the current collateralization ratio distribution, which is very healthy. That distribution can quickly change if existing vaults believe the current Ethereum price is stable and plan to mint more DAI, which would push the ratios down toward 250. Suppose we get to some equilibrium state like ones we observed in the past. In that case, we may have 50% of the portfolio lowly collateralized, and then those numbers become even worse.
- LongForWisdom: Yep, I want to thank Andy for that presentation. That was great. This is something people will need some time to absorb and come back to in another meeting. Are we going to post that presentation anywhere else?
- Andy: I will post it to the forum sometime after this.
January MIP Governance Poll Review
- LongForWisdom: That sounds good. Alright, let me cover the thing I forgot earlier. Prose and David pointed out that I forgot to confirm that the MIPs will continue to the executive vote next week. On Monday, we will have the MIPs executive vote, which will confirm six proposals and mark them as ratified once it passes. Look out for that on Monday. As usual, I will leave the call open if anyone wants to discuss more with Primoz or Andy.
Links from Chat
Common Abbreviated Terms
MCD: The Multi-Collateral Dai system
DC: Debt Ceiling
SF: Stability Fee
DSR: Dai Savings Rate
MIP: Maker Improvement Proposal
OSM: Oracle Security Module
LR: Liquidation Ratio
RWA: Real-World Asset
PSM: Peg Stability Module
VaR: Value at Risk
- Anna Alexa K produced this summary.
- Artem Gordon produced this summary.
- David Utrobin produced this summary.
- Denis Mitchell produced this summary.
- Jose Ferrari produced this summary.
- Everyone who spoke and presented on the call, listed in the headers.