Wanted to take a minute to address people's concerns around the model input parameters and the magnitude of the risk. All of the magnitude numbers are regressed from the original standard model runs, so I'll try to tackle people's concerns about it potentially being too conservative first. Basically, here are my thoughts.
The auction discount number doesn't seem that off to me, honestly. It's probably the input I take the least issue with. For instance, ETH-A auction 134 recently sold for 440.25 dai/eth. Compare that to the OSM price at the time (468 dai/eth) and the 10% discount the model assumes, and you'll notice the percentages aren't far off. Granted, I'm cherry-picking an example here, and I do think the average discount is a bit lower than this, but I don't think it's an unreasonable estimate.
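For the curious, the back-of-envelope here is just the clearing price versus the OSM price:

```python
# Sanity check on the model's 10% auction discount assumption,
# using the ETH-A auction 134 numbers from above.
osm_price = 468.00       # OSM price at auction time (dai/eth)
clearing_price = 440.25  # price the auction actually cleared at (dai/eth)

realized_discount = 1 - clearing_price / osm_price
print(f"realized discount: {realized_discount:.1%}")  # ~5.9%, vs the 10% assumed
```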
Looking at the daily candles we have seen so far, I don't particularly take issue with the severity of the jumps used in the model. The probabilities, on the other hand, might be off, and I would have loved to see some more experimentation with those input variables. For context, the model tests a few high-severity jumps to determine average loss and VaR. It looks like for the 175% LR @ $7MM DC we used a 15% drop, a 30% drop, and a 60% drop. We have already seen a single-day drop of 15% for YFI on Oct 29. A 30% drop is as yet unprecedented, but we have seen multiple 24%+ single-day gains for YFI, so assuming that losses of equal magnitude lie ahead does not seem that odd to me. A 60% drop seems feasible, but I don't know that I buy there being a ~28% chance of that happening in the next year, so admittedly I would say the model is being a bit conservative there.
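One way to gut-check that ~28% figure: it implies a per-day probability for a 60% drop, assuming each day is an independent trial (my simplification, not necessarily how the model treats jumps):

```python
# Back out the daily jump probability implied by a ~28% annual chance
# of a single-day 60% drop, assuming independent days.
p_year = 0.28
p_day = 1 - (1 - p_year) ** (1 / 365)
print(f"implied daily probability: {p_day:.4%}")  # roughly 0.09%, i.e. one such day every ~3 years
```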
The model that the risk team used tries to take future price movement into account and project losses, but the number they use here is potentially a bit questionable. I believe that movement should be accounted for by the sigma parameter in the sheet I linked above, which is presumably the standard deviation used to generate the random walk of YFI's price. My issue is that I can't make any sense of the value of 2.2 that they used. Is that a dollar change? A percent change? I would love more context here.
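To illustrate why the units matter, here is a minimal random-walk sketch of the kind of simulation I assume the sheet is doing. Treating sigma as a daily return standard deviation is purely my assumption, not necessarily how the risk team used 2.2:

```python
# Minimal price random walk driven by a sigma parameter. If sigma were a daily
# return std dev, 2.2 would mean 220% daily moves, which is why the units matter.
import random

def simulate_path(p0, mu, sigma, days, seed=42):
    """Walk a price forward with normally distributed daily returns."""
    rng = random.Random(seed)
    price, path = p0, [p0]
    for _ in range(days):
        price *= 1 + rng.gauss(mu, sigma)  # apply one day's return draw
        path.append(price)
    return path

# One year of simulated YFI prices at an assumed 5% daily vol (illustrative).
path = simulate_path(p0=30_000, mu=0.0, sigma=0.05, days=365)
```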
I guess what I want people to take away from this is that although there are some questionable inputs to our original risk estimate, most of the inputs seem within the realm of possibility, and as such it is probably the best framework we have for understanding how much extra risk the protocol is taking on with this debt ceiling increase.
Ok, so getting to magnitude. I put together a little linear regression model to make a bit more sense of the numbers that our risk model spat out and to project what our VaR is going to be.
There are only 4 data points, though, so the significance F here (22%) is worse than you would typically hope for, but I think it is still useful for gaining a bit of insight into the magnitude of our VaR and what we would need to set the LR to in order to make the VaR work with the current stability buffer ceiling.
Based on what that regression is showing, I think my original estimate of a $5MM VaR might be a bit of a lowball. From my regression model, I think we may be talking more like an $8MM VaR @ a $20MM DC. Just solving for LR, I think we would need to raise the LR to > 299%, which is probably not doable because it would liquidate people's vaults. Based on that, what I would recommend is increasing the LR by about 2500 bps (I would need to look into the vaults more to see how feasible that would be) AND increasing the stability buffer by around $3M.
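The "solving for LR" step is just inverting a linear fit. With made-up coefficients (b0, b1, b2 below are illustrative placeholders, not my regression's actual output) it looks like this:

```python
# Invert a hypothetical linear model VaR = b0 + b1*DC + b2*LR to find the LR
# that would hold VaR at the current buffer. All coefficients are placeholders.
b0, b1, b2 = 2_000_000, 0.55, -3_000_000  # b2 < 0: raising the LR lowers VaR; LR as a ratio (2.0 = 200%)
dc = 20_000_000      # proposed debt ceiling
buffer = 4_000_000   # assumed current stability buffer ($7M target minus the $3M increase)

lr_needed = (buffer - b0 - b1 * dc) / b2
print(f"LR needed: {lr_needed:.0%}")  # 300% with these placeholder coefficients
```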
If you look at what @Primoz was saying here about heuristics concerning the stability buffer and volatile debt, he states that ideally we would have at least 1% of the volatile debt in the stability buffer. $7M would put us pretty close to that at 1.3%, and at a 200% LR it would be enough to cover what I'm projecting the VaR to be.
Here is a link to my numbers in case anyone wants to review: https://docs.google.com/spreadsheets/d/1zIG04XgY05djH3B9vV6MKwmXaDXsZVv6q2F5BxnsPPw/edit?usp=sharing