MIP40c3-SP56: Modify Core Unit Budget, RISK-001


Preamble

MIP40c3-SP#: 56
Author(s): Primož Kordež (@Primoz)
Contributors:
Tags: core-unit, cu-risk-001, budget
Status: RFC
Date Applied: 2022-01-12
Date Ratified: 2022-02-XX

Sentence Summary

MIP40c3-SP56 modifies the DAI budget for Core Unit RISK-001: Risk, replacing MIP40c3-SP13.

Paragraph Summary

MIP40c3-SP56 modifies the DAI budget for Core Unit RISK-001: Risk. The proposed budget for the Risk Core Unit is 230,000 DAI per month. The document contains the following: (i) why we are proposing a modified budget, (ii) a review of past work, (iii) a breakdown of the proposed budget, and (iv) a comparison of the previous budget to the proposed modifications.

Specification

Motivation

The proposed budget will be used for the following expenses: (i) compensation for the contributors of the Risk Core Unit, (ii) necessary tooling costs and subscriptions, (iii) operational costs, (iv) contingency funds, and (v) future grant payments.

MIP40c3-SP56 represents a 26% increase over the previous MIP40c3-SP13 budget (from 182,000 DAI per month to 230,000 DAI per month). The Risk Core Unit currently consists of 8 full-time contributors and one grantee. We plan to increase our workforce by an additional 3 contributors in the short to medium term.

The Risk Core Unit has been financing Makerburn through a grant program since June 2021. The Makerburn website has become a must-have tool for analyzing MakerDAO performance and other on-chain events. The Risk Core Unit and Makerburn have collaborated on several tasks in the past, and there are many synergies between the Maker Risk Dashboard and the Makerburn website.

We are now joining forces on a team level with Makerburn, who has been a valuable community member for over two years. We believe that Makerburn’s dedication to MakerDAO should be rewarded by offering him a package similar to what other core MakerDAO contributors currently enjoy. We will also make sure he is incentivised through the Risk Core Unit’s MKR budget.

At present, the Risk Core Unit is composed of two subdivisions: (i) a Development Team and (ii) a Risk Analyst Team. With Makerburn on board, the Development Team consists of 4 full-time equivalent Developers and 1 full-time equivalent Data Scientist. This team maintains the MakerDAO Risk Dashboard and, from now on, also the Makerburn website. The other part of the Risk Core Unit, the Risk Analyst Team, consists of 3 full-time equivalent Analysts and myself. The Analyst Team covers evaluations and all parameter-related community discussions and proposals. We will soon be looking for one quantitative analyst (job offer coming soon) and one more full-time equivalent Analyst. The Risk Core Unit therefore plans to have 11 full-time equivalent contributors in the short to medium term.

Core Unit ID

RISK-001

Work Review

Our work review for the past few months can be accessed here:

We also document relevant topics and discussions that the Risk Core Unit participates in through our Risk Core Unit Forum Archive.

List of Budget Implementations

This budget uses a DssVest implementation.

  • Payment: based on predetermined monthly budget
  • Asset type: DAI
  • Address: 0xb386Bc4e8bAE87c3F67ae94Da36F385C100a370a
  • Payment Frequency: Accrued per block linearly through DssVest
  • A total of 2,760,000 DAI will be streamed to 0xb386Bc4e8bAE87c3F67ae94Da36F385C100a370a starting 2022-03-01 and ending 2023-02-28 (see the accrual sketch below)
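
To make the streaming mechanics concrete, here is a minimal Python sketch of the linear accrual math, assuming a simple timestamp-based pro-rata rule in the spirit of DssVest; the function and constants are illustrative assumptions, not the contract code.

```python
from datetime import datetime, timezone

# Illustrative linear-stream accrual (a sketch, not the DssVest contract code).
TOTAL_DAI = 2_760_000  # 230,000 DAI/month * 12 months
BGN = datetime(2022, 3, 1, tzinfo=timezone.utc).timestamp()
FIN = datetime(2023, 3, 1, tzinfo=timezone.utc).timestamp()  # last day: 2023-02-28

def accrued(now_ts: float) -> float:
    """DAI vested at time now_ts under a linear stream from BGN to FIN."""
    if now_ts <= BGN:
        return 0.0
    if now_ts >= FIN:
        return float(TOTAL_DAI)
    return TOTAL_DAI * (now_ts - BGN) / (FIN - BGN)

# One month in, roughly one month's budget has vested
# (slightly more, since March has 31 of the stream's 365 days).
one_month_in = datetime(2022, 4, 1, tzinfo=timezone.utc).timestamp()
print(f"{accrued(one_month_in):,.0f} DAI")  # ~234,411
```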

List of Budget Breakdowns

Team

Tooling

  • AWS
  • Infura
  • Subscriptions to off-chain data APIs (Cryptocompare, Nansen, etc…)
  • Sentry
  • Github
  • Domains

Operations

  • Company administration
  • Flights, conferences, etc.
  • Legal costs

Grants

The grant program with Makerburn will be terminated as Makerburn will be joining the Risk Core Unit. Going forward, we will be looking to provide grants to potential new hires or for various specific one-time tasks.

Contingencies

We will continue to keep a contingency buffer of 20,000 DAI, as previously specified in MIP40c3-SP13. We have found this to be very useful when growing the team and covering unexpected costs.

Funding Process

The team uses a company to distribute funds and signs contracts with its contractors (full-time or part-time workers). This provides contractors additional legal security and also enables the team to make payments in fiat (for infrastructure, administrative and operational costs).

Budget Breakdown

The proposed monthly budget for the Risk Core Unit is 230,000 DAI.

Breakdown:

  • Team Costs 76% / 175,000 DAI (previously 127,000 DAI)
  • Tooling 3% / 7,500 DAI (previously 5,000 DAI)
  • Operations & Legal 6% / 12,500 DAI (previously 5,000 DAI)
  • Grants 6% / 15,000 DAI (previously 25,000 DAI)
  • Contingencies 9% / 20,000 DAI (no change)
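
For reference, the line items above sum exactly to the proposed 230,000 DAI monthly figure, and twelve months of that matches the 2,760,000 DAI DssVest stream total; a quick arithmetic check:

```python
# Sanity check: line items sum to the monthly budget, and a year of the
# monthly budget matches the DssVest stream total.
items = {
    "Team Costs": 175_000,
    "Tooling": 7_500,
    "Operations & Legal": 12_500,
    "Grants": 15_000,
    "Contingencies": 20_000,
}
monthly = sum(items.values())
assert monthly == 230_000
assert monthly * 12 == 2_760_000
```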

I am a bit confused here regarding the need for individual CU data analytics.

I agree that Makerburn’s contributions are valuable, but I don’t see how his work adds value to RISK specifically, whereas the majority of the work applies to data analytics generally.

I really want to stop these CU-specific data analytics expenditures and encourage more use of the Data Analytics CU for analytical services and UI analytics development, rather than have every CU build its own data analytics and UIs. If we are going to go this route, then I am going to have a hard time justifying a separate Data Analytics CU.

Finally, I really want to see color directly when a CU is asking for large budget increases (a 26% increase within roughly 6 months is a significant change). I see Maker onboarding fewer collateral types (we are not growing vaults dramatically) but trying to move in the direction of RWF.

Exactly what is the justification for increasing headcount here from 8 to 11, and where is this staff being allocated?


Hello there Primoz! Would it be possible for you or one of your analysts/data scientists to list out your performance expectations for each person in the Risk CU?

And how will you determine if those expectations have been met: what are the outcomes you will track for each individual?

As an example, let’s suggest 4 categories within which you can nest “KPIs”, depending on your Core Unit model and strategy objectives per team member, using a basic balanced scorecard:

  1. Developing Key Risk Indicators
  • Thorough understanding of each potential risk exposure.
  • Documenting each risk, the impact, and likelihood of the risk occurring.
  2. Risk Assessment Delivery (quality, timeliness, cost/efficiency)
  • Identifying any risk exposure relating to current or emerging risk trends.
  • Assessing and quantifying each risk and its potential impact.
  • Providing perspective through benchmarking.
  3. Communication (community engagement and satisfaction, turnover, innovation)
  • Enabling other Maker CU teams and key personnel to receive alerts of potential risks in advance.
  • Providing time to develop the appropriate and effective risk responses.
  4. Financial Frameworks
  • Monitoring CU expenditures and cashflow.

If we are requesting too much, I totally get it, but please provide input with regard to the first question; that would be totally appreciated. Thanks in advance!


Excited to see Makerburn officially join the DAO. Compensation on an equal footing will hopefully help to retain this invaluable talent.

Can you clarify how the IP will be handled? Will the new code that is developed from here on out be open-sourced and the copyright transferred to the Dai Foundation? What happens to the existing code base? What is the current license on the existing code base?

My second question is with regard to product ownership. Has it been discussed how this may shift? If I understand correctly, @makerburn has so far had the role of makerburn.com product owner (with good results). Will this remain the case?

My last question: am I correct in assuming that the activities of the RISK CU are evolving more towards software development? If that’s the case, has the team considered when and how it will introduce industry standard software development practices?

I’m talking about management, QA, and security practices such as agile/scrum, branching and versioning policies, release management, continuous integration and delivery, OWASP principles, etc. Practices that become more important when a software development team scales up and the need for systematic coordination increases.


I think this is a good question to ask, and it would definitely be useful to start a broader discussion around individual performance evaluation.

One thing to consider (as you called out) is that it’s a lot to ask of each individual facilitator to define their own framework and document it to such an extent that there’s transparency towards the DAO. It is a lot of work, and I can imagine that in reality performance evaluation happens in a more informal manner in the core units.

Within SES that’s definitely still the case, whereas at the Maker Foundation I had implemented a more formal evaluation framework.

I think this is a burden that should probably be shared across the DAO: to start a discussion about best practices when it comes to individual performance evaluation and what is typically expected. Facilitators could most likely use some support with this.

One of the questions would be what the most effective way to do performance evaluation is in a DAO. My current thinking is that we should probably try to measure performance at the CU level (what gets delivered), and, when it comes to internal CU management (how it gets produced), focus on recommended best practices while giving individual facilitators ample room to learn and grow.

The same applies more and more to compensation and job levels too. A standardized framework would be very useful. This framework should be seen as supporting the facilitators and indicating best practices in the first place. Only as a secondary step can it then be used to evaluate performance: once there are clear expectations and a good opportunity to learn about and adopt best practices, it becomes entirely reasonable to start asking if/when this will be looked into.

Also, since we’re a DAO, it would be expected that facilitators maintain a high degree of autonomy when it comes to these decisions. Recommendations are probably better than MIPs that enforce a certain framework, for example. Of course, when recommendations aren’t followed, it would be expected that there’s an explanation of why that choice was made.

Another point is that an important distinction should be made here between core competencies vs secondary management practices.

That’s also how I try to approach software development best practices. From a risk core unit, it would be expected that they’re best in class when it comes to the risk profession. When it becomes a software development CU, top-notch software development practices are required. These are core competencies.

People, finance, legal, marcomms, etc. are secondary competencies that may require support in the first place and more critical evaluation as a second step.


I believe asking these questions is starting at the wrong end of the spectrum. As long as we’ve chosen a DAO as our organizational form, it doesn’t make sense to me to start micromanaging inside what should naturally be the domain of the CU facilitator. To gain the benefits of being decentralized, we need to place a certain amount of trust in facilitators to actually manage their teams to the best of their ability.

Also, tracking the performance of about 20 CUs is already a lot of oversight work; if you multiply that by every single contributor, I fail to see how it is not going to lead to oversight overload.

I agree with @wouter that, as a learning tool or as a way to establish best practice, it makes sense to discuss and share methodology and tools for team management, but this is a thread about a CU budget.

I agree with this quote too and think our energy is better spent making good KPIs for the CUs themselves and leaving it to the facilitators to achieve them. If a CU doesn’t achieve its objectives, the facilitator will be the one held accountable. The team would be accountable to the facilitator, who has the power to adjust the team as they see fit within the budget they are given.


For sure. This was my shameless attempt at trying to figure out whether KPIs in a DAO/Core Unit are even possible, trying to get the fire started. As an example, I wrote a post asking whether KPIs in DeFi are even possible: KPI goals for DeFi (scroll down past my other shameless attempt at airdropping Breaker tokens :blush:).


Consolidating all of the thread’s questions:

Headcount

Why are you adding 3 headcount? What are their roles and responsibilities?
Why did we not have a quant on the team prior?
How did the CU function without data analytics in the past?
Why is that needed now?

Risk KPIs

What is your vision for Risk on the analysis side?
What is the target number of risk analyses performed per quarter?
How many were performed last quarter?
Were you unable to deliver on the expected number of collateral risk analyses?
Is there any plan to move toward decentralization of risk teams?

You will be adding one risk analyst, how will this expand your capabilities in performing analyses?
How is this headcount increase, one more analyst, reflected in your KPIs and expectations?

MakerBurn

What is the vision for Makerburn? Does it reside as a third-party resource?
If so, how will IP be handled?
Will the new code that is developed from here onward be open-sourced with the copyright transferred to the Dai Foundation?
What happens to the existing code base?
What is the current license on the existing code base?
Who will own and manage the Makerburn product?
Has the team considered when and how it will introduce industry standard software development practices?


Thanks everyone for your feedback. I’ll try to address as many questions as possible.

The Maker Risk Dashboard and Makerburn are, in our view, tools to justify the risk parameter proposals that are at the core of the Risk Core Unit’s activities. Although one could argue these websites are just another set of data, we think it is of great importance for us to keep developing the metrics on which we rely when making proposals to governance, which is our mandate.

To be clear, most of the data inputs that we use to develop metrics are still provided by the Data Insights CU. Again, our charts have more to do with developing metrics and risk models that are relevant to governance decisions related to collaterals, module implementations, auctions, etc.

Although it may seem our work is focused only on collateral evaluations, we cover much more. We recently started adding growth metrics, developed metrics to measure vault activity to help us calculate aggregate portfolio capital at risk, built automated simulations of auction throughput, started discussions about potential new collateral types (LPs and NFTs), set risk parameters for institutional vaults, and developed new risk metrics for the D3M. I have probably forgotten to include a lot of other work that’s been done recently.
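
For readers unfamiliar with the term, here is a toy Python illustration of what an aggregate “capital at risk” style metric can look like, using a generic expected-loss formulation; the formula and every number are invented for the example and are not necessarily the Risk Core Unit’s actual model.

```python
# Toy aggregate "capital at risk" metric: for each vault, outstanding debt,
# an estimated liquidation probability over the horizon, and a
# loss-given-liquidation fraction. All values are made up for illustration.
vaults = [
    {"debt": 5_000_000, "p_liq": 0.02, "lgd": 0.10},
    {"debt": 1_200_000, "p_liq": 0.05, "lgd": 0.15},
    {"debt": 300_000,   "p_liq": 0.10, "lgd": 0.25},
]

capital_at_risk = sum(v["debt"] * v["p_liq"] * v["lgd"] for v in vaults)
print(f"Aggregate capital at risk: {capital_at_risk:,.0f} DAI")  # 26,500 DAI
```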

Also don’t forget that if the D3M becomes an important tool to scale crypto collateral exposure, we have much more work to do in order to measure risk properly: we need to analyze both the risks of the secondary lender and the assets it holds as collateral. The same goes for institutional vaults: suddenly one of our main focuses becomes monitoring their vault management techniques on an individual level, and we’ve started developing tools for that too. And finally, we might have double the amount of work when MCD moves to L2: pretty much duplicate work with additional complexities.

There is a breakdown in the post. The three additional contributors are Makerburn, another regular analyst, and a quantitative analyst. I’ll explain below why we believe another analyst and a quantitative analyst are needed.

I think @Wouter and @JustinCase put it well; ideally we’d want to have a framework for this at the DAO level. I can say, though, that since our team expanded in the last year, I personally have a lot more work coordinating the team in addition to evaluating and motivating each individual, so this kind of discussion is important. This can be very complex, especially since MakerDAO moves in unpredictable directions when it comes to new implementations, which in turn forces our team to shift their focus and priorities unpredictably as well.

Thanks for your suggested categories, I think most of them make sense. @Josolnik is currently working on a document that addresses some of the concerns you listed. The other part should, in my opinion, be addressed and unified at the DAO level. As @JustinCase suggested, I’d also prefer that a certain degree of autonomy be left to the facilitator to evaluate contributors, as otherwise it might become too time-consuming for the DAO to make these kinds of evaluations for each individual.

Our and @Makerburn’s goal was always to go open source on both dashboards, but it simply wasn’t a priority until now. Now that we can claim both websites are becoming important tools for our community, I think we should devote more time to going open source. We also don’t see issues with transferring copyright to the Dai Foundation in order to have a much more robust setting at the DAO level.

Product ownership and the existing code base will stay as they are. Merging with @Makerburn is happening only at the team coordination level. As said, there are a lot of synergies between us in developing the metrics needed for governance decisions, which is why we proposed this merger. Alternatively, Makerburn could have proposed his own Core Unit, but we believed that being under the same CU makes the most sense, since we already worked like that; the only difference was that he was under a grant and thus not on an equal footing with other core DAO contributors for the work he delivered.

I wouldn’t necessarily say our main activities are evolving towards software development. I only think that our governance proposals (which are the core of our mandate; we see ourselves as a sort of risk consultant) are best served by having proper metrics automated and displayed, so that every DAO contributor can have a better understanding of our proposals. The evolution of the metrics added to both the Maker Risk Dashboard and Makerburn is very much in line with what we saw as relevant to community discussions or the topics addressed.

As for the industry standards we use, please see below:

  • For managing work/tasks we use Scrum. We run 2-week sprints and use Trello to track all our tasks.
  • For code management we use GitHub and a simplified version of Git flow (Gitflow Workflow | Atlassian Git Tutorial). All code goes through PRs and code reviews; once approved, it is merged into the “development” branch, which is deployed to the staging environment, where we do QA. Once QA passes, we merge all changes into the “production” branch and deploy that to the production environment.
  • For CI/CD we use GitHub Actions: on every code change we run all unit tests and code style checks, so the code style stays unified. When all these checks pass, we are able to merge and automatically deploy to the selected environment.
  • For security practices, we keep packages/libraries up to date and use the latest versions of frameworks, which should already protect us from most common security issues. We also watch for the OWASP Top 10 issues constantly during development and code reviews.
  • For monitoring alerts we use Sentry (https://sentry.io/), and we wrote a sentry-to-discord webhook (GitHub - blockanalitica/sentry-to-discord) so we are notified in Discord (and by email) as soon as we get an error (a minimal sketch of this pattern follows below).
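
For illustration, below is a minimal Python sketch of the general Sentry-to-Discord forwarding pattern described in the last bullet; it is not the actual blockanalitica/sentry-to-discord code, and the payload field names and webhook URL are assumptions for the example.

```python
# Minimal sketch of the Sentry -> Discord forwarding pattern (illustrative
# only; not the actual blockanalitica/sentry-to-discord code). The payload
# keys and DISCORD_WEBHOOK_URL are assumptions for the example.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DISCORD_WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

class SentryHook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or b"{}")
        # Field names depend on the Sentry webhook version; hedged example keys.
        title = event.get("message") or event.get("event", {}).get("title", "Sentry error")
        link = event.get("url", "")
        payload = json.dumps({"content": f"Sentry alert: {title}\n{link}"}).encode()
        req = urllib.request.Request(
            DISCORD_WEBHOOK_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # post the alert to the Discord channel
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), SentryHook).serve_forever()
```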

As to @MakerMan’s question, we plan to add 3 additional members: Makerburn, a regular analyst, and a quantitative analyst. Makerburn will continue to maintain and improve the metrics on the Makerburn website; the analyst will be needed mostly once we start proposing the full set of risk parameters for L2; and the quantitative analyst will help us with more advanced simulations (e.g. on auction parameters and gas cost impacts).

We have actually been looking for a quantitative analyst for quite some time. The idea is to get one PhD-level member who can help us with the most complex simulations. The recent auction simulations for the tip and chip parameters, made by exactly this kind of researcher, represent the talent we are after. Although their research confirmed most of our proposed parameters, I still like having someone on the team who can produce such high-quality content and continue to test our proposed parameters with a more formal type of research.

As said, we consume most of our on-chain data from the Data Insights Core Unit and we don’t intend to change this. Our dashboards have more to do with developing metrics relevant to governance decisions and risk monitoring. You can observe this in how we constantly add new metrics on the go, depending on what becomes a priority at Maker.

Our vision is to use the Maker Risk Dashboard and Makerburn as tools for Computer-Aided Governance, continuing to improve decision-making by incorporating evidence from data and simulation modeling. We’ll soon publish a post describing this in more depth. More specifically, our aim is to propose risk-minimized governance parameters by relying on professionally developed risk metrics. At the same time, we have a long-term vision of developing a “risk platform” where risk professionals can join us and specialize in different risk areas. Instead of having duplicate Risk CUs for a more robust DAO setting, my idea was always to have independent teams within the Risk CU that have both budget and milestone autonomy in different risk fields, such as auctions, the surplus buffer, vault parameters, etc.

I think our past evaluations were mostly delivered on time, and other CUs don’t usually wait on us to get collateral onboarded. Sorry if I don’t list all the work here; it should all be aggregated in the monthly reviews. It is true that in the last few months the DAO has been offboarding more collateral than onboarding it; judging from this, you could ask why there is a sudden need for a bigger Risk Core Unit. But as explained at the beginning, our team does much more than regular ERC20 collateral onboarding evaluations. We have been equally busy, if not busier, than during the time of more aggressive collateral onboarding. I also expect Maker to grow its integrations with other protocols (the D3M as an example, potentially NFT and various LP onboardings), and this creates compounding risks for MakerDAO and more time-consuming risk monitoring.

I did my best to address this in the vision reply above.

Another risk analyst is, in my view, a backup who will most likely be crucial down the line once we move MCD to L2, or when we unexpectedly start doing implementations such as adding 1) new D3Ms, 2) institutional vaults, and 3) more exotic collaterals (LPs, NFTs?), where ongoing monitoring becomes much more important than the initial evaluation. In my 3 years of experience at Maker, it isn’t easy to find the right talent for this role, and we want to always be prepared.

A backup for the additional work expected (addressed above), faster response times on evaluations and topic discussions, and more robust monitoring for D3Ms, other upcoming collaterals, and MCD L2 parameter proposals.

The idea is to merge only on a team level: coordinate on adding metrics relevant to governance and use the same source of Risk Core Unit income to pay him on an equal footing.

I tried to answer as much as possible in the reply above to @Wouter. Hope this helps.


Thanks @Primoz for the very thorough reply. Great to see that the team is thinking about software development best practices and applying them. Also happy to see that @makerburn will continue in the role of product owner.

We’ll be connecting to clarify the open-source and licensing aspects of the discussion.


Thank you, this helps a lot.
