Hi guys, it’s been a bit since my last post and I wanted to give an update on my findings.
…So after calculating the expected return on debt (rD) from before, and finding that it seemed to approximate what we have been seeing in the executive votes, I decided to refine my model a bit to hopefully reduce the difference between the calculated SF and the actuals.
To do this, instead of modeling each collateral type as one large CDP, I found the weighted average rD across the individual CDPs. You can find the source code of the program I made to do this here: https://github.com/tamccall/dai-vaults.
Looking at the CDPs individually did make a small difference in my rD calculation. Using the same CAPM parameters as before, I found rD_ETH to be ~14.7% and rD_BAT to be ~9.04%.
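As a quick sketch of the weighting step, here is what "weighted average rD across individual CDPs" means in code. The CDP figures and per-CDP rD values below are hypothetical placeholders, not output from the repo:

```python
def weighted_avg_rd(cdps):
    """Weighted-average expected return on debt across individual CDPs,
    weighting each CDP's rD by its share of total outstanding debt."""
    total_debt = sum(c["debt"] for c in cdps)
    return sum(c["rd"] * c["debt"] / total_debt for c in cdps)

# Hypothetical CDPs: debt outstanding in Dai, per-CDP expected return on debt
cdps = [
    {"debt": 1_000_000, "rd": 0.15},
    {"debt": 250_000, "rd": 0.12},
]
blended = weighted_avg_rd(cdps)  # blended rD across the two CDPs (~14.4%)
```

The point is that large CDPs dominate the average, which is why the per-CDP number can drift from the "one big CDP per collateral type" approximation.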
That said, you can probably take those numbers with a grain of salt, as they are very dependent on how you determine the market risk premium for ETH and BAT respectively. As I said earlier, over the period of Jun 1, 2017 - Jan 26, 2020, ETH had an annualized return of about -10%; if you instead look at ETH over 2016 - 2020, you'll come out with very different numbers. And of course all of these numbers are based on past returns, which are not necessarily indicative of future ones. A second criticism might be the usefulness of rD itself: should a better metric be used to help inform stability fee votes? I'm unsure.
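To make the window-sensitivity point concrete, here is the annualized-return arithmetic. The prices below are made-up placeholders, not actual ETH quotes; only the shape of the effect matters:

```python
def annualized_return(p_start, p_end, years):
    """Compound annual growth rate between two prices over `years` years."""
    return (p_end / p_start) ** (1 / years) - 1

# Hypothetical: a window where the price fell vs. a longer window where it rose.
# Same asset, same end date, very different annualized numbers.
falling_window = annualized_return(230.0, 175.0, 2.65)  # negative, roughly -10%/yr
rising_window = annualized_return(10.0, 175.0, 4.0)     # strongly positive
```

Since the market risk premium feeds directly into the CAPM rD, shifting the lookback window shifts rD_ETH and rD_BAT along with it.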
Ultimately, having gone through this exercise, I have come away with two main takeaways:
- Maker should probably consider making the underlying debt portfolio easier to access
- The way we vote on spread seems a bit backward to me.
First I’ll tackle what is probably the more controversial of those two assertions. If you notice, in both my models above, one of the first things we need to know to calculate an estimate for the SF is the “risk free rate” (rf). Now, if you accept my premise that the rf in the crypto market is equal to the DSR, then it seems odd to me that we first decide on the SF and then subtract a few basis points from there to determine the DSR. Given an rD number from the model above, along with the input parameters sans rf (K, V, etc.), I would have a hard time arriving at the rf that was used in the calculation. It is much easier to calculate the former (rD) from the latter (rf). Likewise, it seems to me that Maker voters should probably be setting stability fees based on the DSR and not the other way around.
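The direction-of-calculation argument can be sketched with the plain single-asset CAPM. The beta, market return, and DSR values here are placeholder assumptions, not proposed parameters:

```python
def capm_rd(rf, beta, rm):
    """Expected return on debt given a risk-free rate (here, the DSR):
    rd = rf + beta * (rm - rf)."""
    return rf + beta * (rm - rf)

dsr = 0.0775       # hypothetical DSR, taken as the crypto-market rf
beta_eth = 1.2     # placeholder beta for ETH collateral
rm = 0.135         # placeholder expected market return

# Forward direction: DSR in, rD out -- a one-line calculation.
rd = capm_rd(dsr, beta_eth, rm)

# Reverse direction: even in this stripped-down single-asset case you have
# to invert the formula, and with the full model's extra inputs (K, V, etc.)
# there is no such clean inversion.
rf_recovered = (rd - beta_eth * rm) / (1 - beta_eth)
```

That asymmetry is the gist of the "set the DSR first, then derive the SF" argument: the forward calculation is mechanical, while backing rf out of an observed spread is not.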
Maybe I’ll put together a signaling request around the latter at some point. I may come back to this topic in the future, but I think I’ve spent enough time trying to figure this out for now!