A Ratings-Based Model for Credit Events in MakerDAO

Starting a topic on the credit risk model we recently shared on placeholder.vc: https://www.placeholder.vc/blog/2019/7/10/risk-management-in-makerdao

Feel free to ask questions and share thoughts here


@alexhevans, congratulations on putting this out. It is extremely thorough and thought-provoking for me. I am going to spend more time with this, but I was hoping you could answer some high-level questions to help frame some of the more general concepts…

  • You mention assumptions about loss severity in the paper, but does this model have an ability to estimate the “default severity” in addition to probability? Can the collateralization ratio be used for this purpose?

  • How are you determining transition rates? Are you looking at on-chain data (saw you mention Descipher in the paper), or is there an assumption that you are using for now?

  • Do you have any kind of public resource to run your model under different assumptions?

Thanks again, truly important work for Maker! And apologies if you answered these questions elsewhere; I have not been able to keep up with the Risk discussions as well as I normally would over the last couple of weeks.


Thank you, Patrick! Some answers below:
  • Collateralization levels for liquidated loans are included in the accompanying dataset on GitHub. Estimates of default severity can be implied from there; the procedure is not explicitly discussed in the paper.
  • This is discussed in Section 3.
  • Yes, the entire model and accompanying code is on GitHub so you can tweak assumptions as needed :slight_smile:

I’m a little lost on the Markov chain part.

From what I’ve read about Markov chains, they assume that past states do not affect the future state of the system; only the current state does. I don’t understand how that’s a reasonable assumption. People are making the decisions behind all of these processes (market behavior, CDP users, governance, etc.), and people have memories that inform future behavior. Doesn’t the Markov assumption significantly change how we perceive change in the system?

Great question. This is a common assumption in such models and is often a reasonable approximation in practice. See the original JLT model: http://citeseerx.ist.psu.edu/viewdoc/download?doi=

That said, it’s an assumption that can be relaxed if needed, as mentioned in the paper. Same goes for the time homogeneity assumption. Though be prepared for the resulting model to be more complex.
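To make the memorylessness concrete, here is a minimal sketch of a discrete-time chain over the four loan states. The transition probabilities are made up for illustration, not values from the paper; the point is only that the next state is drawn using the current state alone, never the path that led there.

```python
import random

# Hypothetical one-step transition probabilities for a loan.
# All numbers are illustrative, not estimates from the paper.
P = {
    "safe":   {"safe": 0.90, "unsafe": 0.07, "wiped": 0.03, "bitten": 0.00},
    "unsafe": {"safe": 0.40, "unsafe": 0.45, "wiped": 0.05, "bitten": 0.10},
    "wiped":  {"wiped": 1.0},   # absorbing: a wiped loan stays wiped
    "bitten": {"bitten": 1.0},  # absorbing: a bitten loan stays bitten
}

def step(state, rng=random):
    """Draw the next state. Note the draw depends only on the current
    state, not on how the loan arrived there (the Markov property)."""
    states = list(P[state])
    weights = [P[state][s] for s in states]
    return rng.choices(states, weights=weights)[0]

def simulate(state, n_steps, rng=random):
    """Simulate one loan's path for n_steps transitions."""
    path = [state]
    for _ in range(n_steps):
        state = step(state, rng)
        path.append(state)
    return path
```

Relaxing the memorylessness assumption would mean making the weights in `P` depend on the path (or on time), which is exactly where the added complexity comes from.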


Sweet, good idea to ease into a model. I’m sure we’ll go through all sorts of iterations. Yeah, it’s complicated to fully model the risk of loans backed by assets that are constructed within the whims of human culture.

I liked Cyrus’s question about whether we had the ability to use subjective opinions from experts as data for this model. How could all the qualitative data we gather accurately be transformed into a mathematical construct?

Appreciate all the effort and fast answers.

Figured I would post a couple of things here for people that didn’t catch the July 18th Governance & Risk call.

  1. Here is the video of the call.
  2. Here is the discussion thread and summary of the call.
  3. Here is the section that summarizes Alex’s presentation about the model:

Alex Evans Discussing, “A Ratings-Based Model for Credit Events in MakerDAO”


  • As part of Placeholder’s involvement with the Maker network they put together a Risk Model.
  • Will be describing some of the thinking that went into the model and also explain some of the core mechanics and assumptions.

  • It’s pretty obvious; we’ve covered it many times before. As Maker holders, we recognize the importance of risk. We need to think systematically and rigorously about risk and how to manage it.
  • We went with this approach for three reasons:
    • It works well with smart contracts, since the world is defined in “states”. These models incorporate the idea of “state”.
    • These models work well with state-dependent payoffs.
    • These models are very adaptable and modular. You can play with them very easily.

  • This model views Maker as a collection of loans; the global behavior of Maker is described through the emergent properties of that pool of loans.
  • First we define what a loan is: an individual draw transaction.
  • Each loan can be in 1 of 4 states: Safe, Unsafe, Wiped, Bitten.
    • If the loan is wiped or bitten, it cannot return to the safe/unsafe state again.
  • There is a map for how states can transition into each other.
  • The amount of time a loan spends in a given state is an important data point.

  • If you have the 5 rates/assumptions shown in the matrix you can calculate every one of the formulas in this slide.
  • Good for both individual loan parameters and global parameters.
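As a rough illustration of how a handful of transition rates determine everything else, the sketch below builds a continuous-time generator matrix over the four states and recovers transition probabilities over any horizon via the matrix exponential. The rates are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.linalg import expm

# States: 0=safe, 1=unsafe, 2=wiped, 3=bitten.
# Off-diagonal entries are illustrative transition rates per unit time;
# the diagonal is set so each row sums to zero, as a generator requires.
Q = np.array([
    [0.0, 0.5, 0.2, 0.0],   # safe -> unsafe, safe -> wiped
    [2.0, 0.0, 0.3, 0.4],   # unsafe -> safe, unsafe -> wiped, unsafe -> bitten
    [0.0, 0.0, 0.0, 0.0],   # wiped is absorbing
    [0.0, 0.0, 0.0, 0.0],   # bitten is absorbing
], dtype=float)
np.fill_diagonal(Q, -Q.sum(axis=1))

def transition_probs(t):
    """P(t)[i, j] = probability of being in state j at time t,
    starting from state i, via the matrix exponential of Q*t."""
    return expm(Q * t)

P1 = transition_probs(1.0)
prob_bitten_from_safe = P1[0, 3]  # chance a safe loan is bitten within one period
```

Because wiped and bitten are absorbing, the probability mass in those columns can only grow with the horizon, which is why a few rates are enough to pin down loss probabilities at every time scale.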

  • Safe+unsafe is defined generally as “open” (reducing state types to 3: open, wiped, and bitten)
  • Here we estimate two transition rates and everything flows from those.
  • The covariate (c(t)) is the Collateralization Ratio. It is a factor variable coded 0 if CR>280%, 1 if 250%<CR<280%, and 2 if CR<250%. The assumption is that as the CR decreases, the rate at which loans are bitten increases.
  • Similarly, we do the same for the open->wipe rate. The assumption is that as the SF goes up, more people are likely to wipe their CDPs.
  • Estimates are reported with 95% confidence intervals.
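A minimal sketch of the covariate idea, assuming a proportional-hazards-style form where the bite rate scales as exp(beta * c(t)). The base rate and beta below are illustrative placeholders, not fitted values from the paper, and the treatment of the boundary CR values is an assumption.

```python
import math

def cr_factor(cr):
    """Code the collateralization ratio as in the presentation:
    0 if CR > 280%, 1 if 250% < CR <= 280%, 2 otherwise.
    (Boundary handling here is an assumption.)"""
    if cr > 2.80:
        return 0
    if cr > 2.50:
        return 1
    return 2

def bite_rate(cr, base_rate=0.05, beta=1.2):
    """Open -> bitten rate under a proportional-hazards-style form.
    base_rate and beta are illustrative, not estimated parameters."""
    return base_rate * math.exp(beta * cr_factor(cr))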

  • This is called scenario stress testing.
  • We’re looking for probability of losses against various Collateralization Ratio levels.
  • A loss is defined as a bitten CDP.
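A scenario stress test of this kind can be sketched as a simple Monte Carlo: simulate many loans under assumed per-step wipe/bite probabilities (illustrative stand-ins for the fitted rates, with higher bite probability standing in for a lower collateralization bucket) and count the fraction that end up bitten.

```python
import random

def loan_is_bitten(p_bite, p_wipe, n_steps, rng):
    """One loan: each step it is bitten, wiped, or stays open.
    Per-step probabilities are illustrative stand-ins for fitted rates."""
    for _ in range(n_steps):
        u = rng.random()
        if u < p_bite:
            return True           # bitten = loss event
        if u < p_bite + p_wipe:
            return False          # wiped = repaid, no loss
    return False                  # still open at the horizon

def stress_test(p_bite, p_wipe, n_steps=52, n_loans=10_000, seed=42):
    """Estimate the probability of loss as the fraction of bitten loans."""
    rng = random.Random(seed)
    losses = sum(loan_is_bitten(p_bite, p_wipe, n_steps, rng)
                 for _ in range(n_loans))
    return losses / n_loans

# A lower collateralization bucket is modeled as a higher per-step bite
# probability, so it should show a higher estimated loss probability.
low_cr_loss = stress_test(p_bite=0.02, p_wipe=0.05)
high_cr_loss = stress_test(p_bite=0.005, p_wipe=0.05)
```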

Questions & Comments for Alex

  • There is a typo in the unsafe->bitten state change (should be qub instead of qwb)
  • 17:47: Are you dealing with Safe->bitten at all?
    • It should be 0 in the matrix, but I keep it there with a value qsb because, over larger time intervals, the data may show what looks like a safe->bitten state change.
  • 26:19: Are you defining ‘loss’ as bite? or liquidation below 100%?
    • It’s defined as bite.
    • To get the actual “loss” you need to have an additional assumption about how many CDPs are bitten below the 100% level. It would be a distribution that tells us the recovery rate.
  • 27:06: Have you done any research on how one would go about calculating those transition probabilities and if so can you talk about the challenges associated with that?
    • 1: We shared a GitHub page with the code to run all of these.
    • 2: The data that Vishesh provided is a good place to start.
    • 3: You can try to project based on the distribution we get from current data, but it’s probably an exercise in conjecture due to lack of historic data.
    • Cyrus: All risk models right now are in a state where they lack long history of data to back-test by.
  • 31:02: Do you think transition probabilities would be similar across collateral types?
    • The rates depend on the price path of the collateral, and the borrower behavior associated with borrowing that collateral. Do you behave differently when you borrow against ETH versus a House?
  • 32:50: To what extent is the relative rate between different states determined by user behavior vs asset volatility?
    • Should we be thinking about relative transitions from one state vs. another, or should we be using the overall aggregate amount of liquidation you would expect?
    • Aggregate amount of liquidation would be more standardized to human behavior across assets than the others.
  • 34:44: How can this be applied to MKR valuation?
    • There is a brief note at the end of the paper talking about this.
    • This model is actually very good at estimating the value of derivatives with state-dependent payoffs (which sounds like MKR).
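The loss definition discussed earlier in the call (a bite probability combined with an assumed recovery distribution on bitten CDPs) can be sketched as a back-of-the-envelope expected-loss calculation. All numbers here are hypothetical.

```python
# Hypothetical expected-loss calculation: combine a model-implied bite
# probability with an assumed recovery distribution on bitten CDPs.
p_bite = 0.05                       # illustrative probability of a bite
recoveries = [1.0, 1.0, 0.95, 0.8]  # assumed recovery rates on bitten CDPs

# Severity = 1 - recovery; most bites recover fully, a few do not.
expected_severity = sum(1.0 - r for r in recoveries) / len(recoveries)
expected_loss = p_bite * expected_severity
```

This is where the additional assumption mentioned in the answer comes in: the model supplies `p_bite`, but the recovery distribution has to come from liquidation data or a separate assumption.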