Request to Core Units - Expanding the Collateral Onboarding Framework

Hello Core Units :slight_smile:

It seems as though those of you involved with collateral onboarding are using this framework to prioritize applications. I think I speak for everyone who has submitted a MIP6 application when I say that the objectivity and clarity are greatly appreciated. It takes a lot of guesswork out of making an application and creates a better experience for everyone.

In the framework there seem to be three categorizations for each column which contribute to an overall score: “low,” “medium,” and “high.” It would be really helpful if each Core Unit could write up a quick methodology of how they arrive at each of the three categorizations for their relevant columns in the sheet.

These are the columns and the Core Units responsible for them:

Growth CU

  • PR Benefit
  • PR Risk


  • Dai Supply
  • Fees Generated
  • Diversification
  • Credit/Market Risk
  • Custodial Risk
  • Legal Risk (seems to be excluded)
  • Asset Setup Risk (seems to be excluded)
  • Difficulty


  • Risk
  • Difficulty


  • Risk
  • Difficulty


  • Risk
  • Difficulty

My ask is for some sort of clarity around what, in general, causes you to provide a low, medium, or high rating. I know you’re all very busy, but I’d like to stress how helpful it would be for potential and current applicants to have more clarity around these items. Thank you for your consideration.


Hi @g_dip,

The @Risk-Core-Unit has prepared a document where we outline the methodology and principles that we follow for the Collateral Onboarding Prioritization Framework. Please see the following document: Risk Core Unit - Collateral Onboarding Prioritization Framework - Google Docs


Thanks for the reply Sean, this is a great doc. I’m curious - how does this…

Note that the scoring system is based on five score alternatives for each metric: none, low, medium, high, and extreme.

… sync with the “low, medium, high” designations in the spreadsheet? Is there a formula you’re using?


Here is a quick write-up on the current methodology for the Risk section for RWA assets. These are only guidelines. The RWA section is no longer used. Neither Primoz nor I are aware of what Legal Risk and Asset Setup Risk are (which explains why they are not used; we didn’t add them).

DAI Supply: Using the 12-month estimate of the DAI that can be generated and the ability of the AO/legal structure to be trusted with that much

Fees generated: DAI Supply * SF => normalized to 3% (i.e. the fees generated bucket is the same if SF = 3%, higher if > 3%, lower if < 3%)

Diversification: Subjective and linked to the current portfolio (evolving); can also mean diversification in terms of AO type (a Fortune 500 company or asset manager instead of a single asset originator)

Credit/Market Risk: should be broadly the inverse of the SF

Custodial risk: dependent on the legal structure in place
=> 6S Trust: High
=> Cayman Foundation: Medium
=> Centrifuge current: High; with independent director fixed & true sale => Medium; with collateral agent => Low

Difficulty:

=> 6S Trust: Low
=> Cayman Foundation: Low
=> Centrifuge: Low
=> Other needing a smart contract or other legal structure: High/Extreme
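
The “Fees generated” normalization above can be sketched in a few lines. This is a hypothetical illustration only; the bucket names and the exact comparison against the 3% baseline are my assumptions, not the Risk CU’s actual implementation:

```python
# Hypothetical sketch of the "Fees generated" metric described above.
# Bucket labels and the 3% baseline comparison are illustrative assumptions.

def fees_generated(dai_supply: float, stability_fee: float) -> float:
    """Estimated annual fees in DAI: DAI supply times the stability fee (SF)."""
    return dai_supply * stability_fee

def fees_bucket(stability_fee: float, baseline: float = 0.03) -> str:
    """Normalize to the 3% baseline: same bucket at 3%, higher above, lower below."""
    if stability_fee > baseline:
        return "above baseline"
    if stability_fee < baseline:
        return "below baseline"
    return "at baseline"

# Example: a vault expected to generate 10M DAI at a 4% stability fee.
print(fees_generated(10_000_000, 0.04))  # 400000.0 DAI in estimated annual fees
print(fees_bucket(0.04))                 # above baseline
```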


@Growth-Core-Unit uses the collateral framework spreadsheet to showcase the importance of a partner (the token issuer) for the growth of Maker. When performing the scoring assessment, we take into consideration:

  1. The partner’s present and future positions in Maker
  • If the partner is considering opening a vault using its token and, if so, the size of its position
  • If the partner is considering opening other collateral vaults (extra points if they are thinking about wBTC or ETH)
  • The total size of their current positions in Maker
  2. The impact of the partner on the crypto ecosystem
  3. How big the partner’s community is
  4. The commitment of the partner to work alongside us

That helps us understand the PR Benefit of onboarding collateral into the protocol.

The PR Risk is only used when onboarding collateral could represent a risk for the Maker community. Whenever we need to use this parameter, we will explain in the forum the reason for our assessment.


Thanks for the question Greg,

Although it looks like there are only three designations in the spreadsheet, there are in fact five. “None” and “extreme” just occur less often.

In terms of how the scoring is determined and calibrated, please see the relevant section in the doc:

As collateral types can differ considerably, it is important to note that the score given to a metric is not based on any particular, predefined, external criteria. Rather the scoring is undertaken on a case by case basis, using the judgment, interpretation, and experience of the members of the Risk Core Unit.

There is no predefined formula that we use. Instead, the judgment and experience of the team members in the Risk Core Unit, together with the questions and considerations that we outline further down in the document, serve as the basis for scoring individual metrics.


Can I ask who is actually maintaining the Collateral Framework sheet? There are some errors that need correcting.


It’s a joint effort, but GovAlpha has been managing the non-estimate updates. If you notice errors outside of the Core Unit estimates, please get in touch with me.


What does this mean?

(Was answered directly with @g_dip, provided here for reference)

Custodial risk is the risk of the structure as a whole. Basically, it’s high when the structure seems okayish but there has been no legal review on MakerDAO’s side. Then you move down the risk ladder as you implement what the legal review highlights.

Thanks @g_dip for getting this discussion going and for all the comments in the reply.

I’ll be working to incorporate the applicable feedback into the Collateral Engineering Services Product Plan.


The protocol engineering team reviews collateral by assessing a combination of technical criteria in order to determine overall risk and difficulty.

High, medium, or low classifications for a particular collateral type are difficult to rigidly apply, because a variation on any particular parameter could be considered a red flag that increases the overall risk classification. In this sense, our assessments are very situational and similar to a typical smart contract audit.

For example, a standard ERC20 token with no additional functionality may be classified as low, whereas a non-standard implementation or authorisation mechanism (for example, a multisig on the token) may be considered medium. If any additional code work would be required, if the code is not verified on Etherscan, or if there is an external dependency on an oracle, then this could be considered high risk. Note, this is merely an example to share the thinking involved.
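The escalation in that example can be sketched as a toy heuristic. This is purely illustrative: as noted above, real assessments are situational, like a smart contract audit, and are not reducible to a checklist; the function and parameter names here are my own invention:

```python
# Toy sketch of the escalation example above; NOT Protocol Engineering's
# actual methodology. All names and the rule ordering are assumptions.

def risk_class(is_standard_erc20: bool,
               has_multisig_auth: bool,
               needs_extra_code: bool,
               verified_on_etherscan: bool,
               depends_on_external_oracle: bool) -> str:
    # Extra code work, unverified source, or an oracle dependency escalates to high.
    if needs_extra_code or not verified_on_etherscan or depends_on_external_oracle:
        return "high"
    # Non-standard implementation or authorisation (e.g. a multisig) is medium.
    if not is_standard_erc20 or has_multisig_auth:
        return "medium"
    # A plain, verified ERC20 with no additional functionality may be low.
    return "low"

# A standard, verified ERC20 with no extras:
print(risk_class(True, False, False, True, False))  # low
```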

In our technical assessments, the following non-exhaustive list and elaborations encompass the criteria that we take into account to build an overall feel for the risk and difficulty:


  • Was the contract externally audited?
  • Was the contract internally audited and a self-audit provided?
  • Is the project’s technical documentation available?


  • Is the contract verified on Etherscan?
  • Does the contract have overflow checks?


  • What contract permits exist?
  • Are there external owners or keys?
  • What privileges do external owners have?


  • What code libraries are being used?
  • What other functions are allowed?
  • What other implementations exist?
  • What Token Standards are implemented?
  • Does the contract use an upgradability pattern?


  • Can existing token adapters be used?

Formal Verification Considerations

  • Are there any concerns regarding token semantics?
  • Are there any Oracle implications?
  • Are there any external calls or other risks (including complex inheritance structures)?

These criteria are not fixed or exhaustive and will be added to over time, depending on the token types being reviewed.