How Should Delegates Vote? Scientific Governance

Hello. There is much discussion going on these days around Governance and the role of Recognised Delegates.

I want to give my 2 cents on the topic, bringing some ideas from the world of academia.

Governance Problem Today: Delegates need to vote on complex proposals (new CUs, budgets, collaterals, etc.). Recognised Delegates (even if well paid) have:

  1. limited time available, and
  2. limited knowledge on technical topics.

Both of these constitute inevitable bottlenecks: proposals are constantly growing in number and technical complexity (see, e.g., the recent proposal by DECO).

As a consequence, we are starting to see Delegates having trouble coming up with an opinion and postponing or abstaining from their choices. This is inevitable, as observed by @MakerMan

and similarly, we are getting relatively poor reasons for their votes:

The same situation in Academia:

The above picture is nothing new. In academia, for example, people need to vote on: who deserves tenure this year? Which research projects deserve a grant? Which faculty should be enlarged or reduced? And so on.

The above decisions are frequent, large in number, and involve extremely high levels of technical understanding (e.g., even a respected professor of mathematics is unable to properly evaluate a project in applied physics).

The solution: Panels of Experts

What happens, at all levels, is the following:

  1. There are a number of people who need to make a choice - think of them as our Delegates.
  2. They form or contact a panel of technical experts on the subject. These should be individuals as free as possible from conflicts of interest and well motivated to perform the evaluation properly.
  3. The experts give a judgment (e.g., a ranking of the projects submitted for grant funding) and make it publicly available.
  4. The decision makers look at the expert evaluation and make their final decision.

In practice, the decision makers (1) very often accept (or only slightly modify) the recommendation of the panel of experts (3).

Sometimes, however, the decision makers go against the experts' choice. Example:

  1. the chosen candidate, while excellent, lacks some other desired characteristic (e.g., we might prefer a woman over a man to reduce the gender gap, which is a purely political, non-scientific decision).

In such a case, the decision makers (1) need to justify their choice, again publicly, and accept (and explain to their backers) the consequences of deciding against the panel.

My suggestion for MakerDAO.

  1. We should stop (quickly!) asking delegates to evaluate proposals or to do due diligence on them.
  2. We should give delegates the instruments to hire/engage technical experts.
  3. We should ask delegates to vote after having received a technical opinion from the experts, explaining their final choice (whether in accordance with or against the experts).
  4. The above steps need to be 100% transparent, so that everybody can see who has the final word (the delegates), who is on the panel of experts, what the experts think, what the final decision is, and why.

This is a huge pain point for delegates. Thank you for bringing this up. Especially around legal and coding issues, this is sorely needed.


+1 on this. I think a red team core unit, or framework, for proposals would be crucial.


MELA Core Unit: Monitoring, Evaluation, Learning, Advisory.

Brought to you soon™ by SES.*

Ideally, we have some smallish, independent, multidisciplinary team that can structure and source deals (not only upcoming but also current) and present an evaluation to the Delegates. This Core Unit would also help existing Core Units structure their reporting to ensure that the DAO has a clear picture of what is going on on different fronts.

(*) Soon™ is a bit disappointing (at least for myself). I know we need this yesterday, but we have been covering other bases, and finding the right people to start a Core Unit is not that simple.

Also, happy to hear other solutions on how we can be more transparent while simplifying things (Governance overload is a real thing).


I agree with this concept, although my vision is slightly different. I believe we should have voter committees that each specialize in certain verticals (which I call Maps) that already have defined norms and scope (which I’ll help bootstrap during the governance earthquake).

These voter committees consist of both delegates and at least one MKR Mafia member. The MKR Mafia members typically specialize in just one committee, while the delegates sit on as many committees as they can handle as a full-time or part-time job (depending on how much MKR they have delegated and thus their level of compensation).

MKR Mafia = doxxed regular MKR holders with no compensation beyond the Sagittarius Engine, SourceCred, and other open frameworks, and no interest anywhere in the workforce. This uniquely makes them the best proxy for grassroots MKR holders.

The voter committees then hold regular meetings (probably monthly during the earthquake, perhaps quarterly over time) where they discuss topics related to their vertical and issue voting advice. They may also create new MIPs and convene special meetings to deal with urgent MIPs related to their Map.

As part of these meetings, they have the “power” to call in mandated actors and other elements of the workforce to provide them with input. In particular, they will make heavy use of the meta-governance workforce, such as the hypothetical MELA Core Unit described above, and are generally the main place where the meta-governance organization goes with its findings and advice.

Of course, the power to call in experts and expert committees isn’t absolute - it will be up to voters to enforce it in situations where it is clearly warranted, and up to decentralized workforce members to decide how to prioritize their time. This way a best practice should emerge organically.

But my basic expectation is that voter committees with large amounts of proven, declared MKR voting power from a diversified group of MKR Mafia must be given higher priority, because they are most likely to represent the truest will of grassroots MKR holders - and also because there is a greater chance they can actually fire someone who refuses to help them, if they consider it damaging enough.


The “panel of experts” approach is working out well for rates/parameters.


This is the key point to me. Currently, new CU proposals are often posted with little to no feedback from what I would consider ‘technical’ experts.

In a traditional business, any new investment proposal would require approval from the finance team (which evaluates the proposal based on ROI, business need, etc.) and, depending on the level of spend, approval from the function from which the proposal originated.

For example, a proposal for $250K R&D spend would require approval from the Finance Manager + Director of R&D, $1M R&D spend would require approval from Finance Director + Director of R&D + President of business.
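The tiered approval idea above can be sketched as a simple lookup. This is a hypothetical illustration only: the tier amounts and role names come from the example in the previous paragraph, not from any actual MakerDAO or corporate process.

```python
# Hypothetical sketch of a tiered approval matrix: higher spend levels
# require a larger set of approvers. Tiers and roles are illustrative.

APPROVAL_TIERS = [
    # (maximum spend in USD, required approvers)
    (250_000, ["Finance Manager", "Director of R&D"]),
    (1_000_000, ["Finance Director", "Director of R&D", "President"]),
]

def required_approvers(spend: float) -> list[str]:
    """Return the approvers required for a given spend level."""
    for max_spend, approvers in APPROVAL_TIERS:
        if spend <= max_spend:
            return approvers
    # Spend above the highest tier still needs the full top-tier set.
    return APPROVAL_TIERS[-1][1]
```

For instance, `required_approvers(250_000)` returns only the first-tier pair, while a $1M proposal also requires the President's sign-off.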

How I envision this working with the DAO under the existing structure is that any CU proposal must be evaluated by representatives of existing core units that are within the same domain or function of the proposed CU. Finance should also provide an analysis of financial impact (straightforward for non-revenue generating CUs but more complex for those that can impact revenue or other strategic initiatives) and the domain expert can provide an analysis of why the proposed CU makes or does not make sense strategically.

Ideally, the more domain experts who can contribute an opinion and provide an overview on the proposal, the better informed delegates and MKR holders will be when it comes time to vote on it.


Here is a solution: a recent application by CurioDAO proposes a collaboration among DAOs with the aim of complementing asset sourcing and screening capabilities, with a focus on real-world assets such as fine artwork and rare cars.

It feels quite challenging to expect a single community to have a network of technical experts for all possible collaterals. The amount of work required for screening and sourcing behind the scenes is tremendous, and I believe such collaborations could bring significant synergies.

We would be glad to get your feedback on this application. Any feedback can help us improve on this goal:

:grinning_face_with_smiling_eyes: Excellent. Looking forward to this one.


I think the recent post here is also relevant to this issue.

It seems problematic that a vote went from a yes to a no in ten days. Some reasons were provided, but as far as I can tell, the only relevant event that happened during those ten days was Liscon.

Also, this comment suggests that there is work to be done to prevent finger-in-the-wind voting.


I’m curious. Why do you think that this is problematic?

Asking because we saw the opposite several times. Do you think that is problematic as well?

It seems like the information needed to make an informed vote was not really there (cf. the quoted comment).

And in this case, aside from Liscon, no new information came to light in those ten days as far as I know. I may be wrong about this if there were discussions in a call or some other place that I missed. Usually, in the other cases I’ve seen, there is an evolving discussion that leads people to change their votes, and that’s totally fine (and expected!). Hence, I’m highlighting this example to see how we can do better.
