My observations from reading through this thread and hearing rumblings around the DAO re the Content Production CU – people are upset because, as they see it, there have been few, if any, “impactful” results reported on the CU’s activities or accomplishments. For instance, there’s no blog with updates; there are no coordinated campaigns on Twitter (the Content CU is not involved in Maker Growth’s tweets afaik); there are no conference-focused campaigns that I know of; and there seems to be a black hole when it comes to what exactly this team does all day. Couple these impressions with the considerable funds allocated to the CU, and it’s easy to understand why some people may be angry, frustrated, and ready for change.
This is correct.
Scientific Governance and Core Unit Management
Maker has always distinguished itself in its desire to manage through science, or scientific governance. Science itself is the pursuit of understanding. In the natural sciences, that understanding is acquired empirically. Empiricism is built on the idea that knowledge is valid and can be acquired through observation: direct sense experience that numerous observers can mutually agree upon (consensus).
What this looks like in MakerDAO is an agreement to use data, or more “objective” measures, to arrive at an understanding that supports decisions and action. The interpersonal benefit of the empiric is that it helps us all align on common bases of information, potentially leading us to agreement and more natural conclusions. The benefit to management is that it helps groups navigate, coordinate resources, and avoid conflict. Data and scientific governance also provide us a way to make sense of disagreements and, in the event of disagreement, to come to a mutual understanding of the respective positions and their reasoning/logic. This helps create a polis, a common center for discourse. Of course, data does not always lead us perfectly to our conclusions; data itself is subject to interpretation, or story, by the scientist and their unique individual perspective. We will put that aside for now.
Scientific governance was a novel idea in crypto. Maker first applied it to peg management. The various community members would convene and evaluate where the peg was (under/over), agree on the end goal (to get back on peg), and then agree on-chain to pull the various levers with an anticipated outcome (supply or demand adjustments via Stability Fee rates, the DSR, etc.). If the outcome was not as desired or expected, another attempt could be made the following week to help bring the peg back to par. There was agreement and disagreement, but everyone would understand the logic and recommendations made by each party. The experience was fundamentally collaborative and all incentives were aligned (keep the peg!). This was scientific governance in the sense that “knowledge is tentative and probabilistic, subject to continued revision and falsification.”
Again, the objective of science here was to use available information and reasonable predictions to evaluate an outcome. A hypothesis was made and either invalidated, or not. The benefit of this process is that it helps us understand what we expect and, if things do not go as planned, to interpret the results and revise our hypothesis.
In the same way, KPIs are scientific governance: they allow us to negotiate what we expect (a hypothesis). If things go as planned, fine; if not, they allow us to look at the results, revise, draw inferences and conclusions, and then direct our next course of action.
A major issue is that these support core units have failed to establish clear mandates, objectives, and KPIs, so evaluating their performance is near impossible. With Content Production the issue is serious, because the spend is high and the absence of a strategy and metrics is all the more problematic.
KPIs and Core Unit Management
KPIs are data. They are “key performance indicators.” What does this mean? They are objective metrics, either quantitative or qualitative, that can be agreed upon and observed. A set of different observers can arrive at a consensus and agreement as to whether or not they were met.
For example, a KPI could be: Launch a Dai bridge to Starknet. This goal can be defined, a milestone and date can be assigned, and we can agree whether it happened as planned, or not.
By the same token, a KPI like “Have 10mm Dai use the Starknet bridge this year” is easy to review. Did this happen? Did we hit our mark? If not, why not? KPIs provide data, and data provides common ground for discussion, for the evaluation of performance, and for scientific governance.
Said a different way: KPIs allow for an agreement and understanding of performance and expectations. They allow for analysis of the source of underperformance or failure, or, in positive cases, allow management to evaluate overperformance.
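To make the idea concrete, here is a minimal sketch (purely illustrative; the record structure, names, and figures are hypothetical, not anything the DAO actually uses) of a KPI as an agreed-upon target plus an observed result, where any observer evaluating the same data reaches the same verdict:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A hypothetical KPI record: a negotiated target and a measured outcome."""
    name: str
    target: float    # the goal both sides agreed to
    observed: float  # the outcome measured at review time

    def met(self) -> bool:
        # Different observers looking at the same data arrive at the same answer.
        return self.observed >= self.target

# Mirrors the bridge example above; the observed figure is made up.
bridge_usage = KPI(name="Dai using the Starknet bridge this year",
                   target=10_000_000, observed=7_500_000)
print(bridge_usage.met())  # prints False
```

An unmet KPI like this one is not just a failure mark: it is the shared, objective starting point for the next question, “why not?”, and for revising the hypothesis next cycle.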
KPIs are negotiated. There are two stakeholders, one performing the work (the core unit) and one requesting the work (community/DAO). The CU pushes to make sure they can deliver, the community is responsible for negotiating ambitious KPIs, pushing the teams to be more aggressive (while also creating risk they fail to deliver).
Once the KPIs are agreed upon, they can be evaluated. Some less ambitious ones may be met; some more ambitious KPIs may fail. This is part of the negotiation between the management and the team. Both sides are responsible for selecting reasonable KPIs and communicating expectations on each to the other. After the performance cycle, scientific governance may then occur: a CU may be retained or dismissed, KPIs may be adjusted for the next cycle, etc.
Content Production KPIs
While Content Production has been delivering work (various events, AMAs, etc.), that work is, in practice, for naught, which adds to their frustration. The main reason is the absence of a strategy and KPIs. Without an understanding of what they are moving toward and how they will be measured, their work cannot be presented as clearly progressing their program, a program which should have been agreed upon and negotiated between the CU and the DAO. Now the DAO is unable to ascertain their performance (or under-performance), because the CU did not set a strategy or KPIs, and the DAO never accepted and agreed to a proposed plan and benchmarks.
It would be appropriate to review and discuss what Content Production promised to do upon funding and what KPIs they established at their Core Unit inception:
The primary focus of our first three months will be to [sic]
During this period, team members will continue lending support to projects like:
• Maker Relay (Written 4 and Audio 4)
• The Community Portal
• The Community Blog (coming soon)
Month 1: Setup and Coordinate
• Incorporate and set up administrative operations (payroll, etc.)
• Implement a Request For Content (RFC) process
• Work with the Legal Core Unit to determine legal limitations on content and needs with regard to IP (copyright/trademark/TOS).
• Work with other Core Units to surface content-related needs and coordinate the management of MakerDAO’s digital properties, including, but not limited to:
o Maker Blog
Month 2: Strategize, Execute, and Renew
• Develop a content strategy based on organizational goals, define KPIs to measure the effectiveness of our efforts, and determine staffing needs
• Hire content creators and onboard them with Opolis
• Start producing and distributing content
• Report on our progress and submit a budget proposal to continue our work
Month 3: Produce, Report, and Renew
• Provide feedback to RFC on the budget proposal
• Continue producing content and reporting results
I have not done an in-depth review of each line item. To be fair, few of the support core units have a clear strategy and KPIs. However, Content Production explicitly promised to produce KPIs and a strategy as a central deliverable. Further funding was to be contingent on this strategy, and the costs were supposed to be tied to its explicit delivery and milestones. This was a reasonable path forward amidst uncertainty. Unfortunately, it does not look like negotiation and acceptance of a strategy and KPIs ever occurred. Both the community and Content Production forged ahead with funding despite lacking an understanding and agreement as to what was to be delivered.
Ideally, the concerns raised above regarding KPIs would have been addressed by the end of month two as proposed (June 2021) and agreed upon at the renewal of funding (July 2021). Though no KPIs were produced and agreed upon, the budget spend was approved and significantly increased through May 2022.
It is now January 2022, and we find a recurring large budget spend (~140,000/month with contingency) with no strategy and no way of measuring the success or utility of the spend. The core unit has been operational since May 2021, approximately 8 months. From a management perspective, operating a group with no strategy and no KPIs is problematic, regardless of whether Content Production is retained or dismissed.
At the same time, the community must hold itself somewhat responsible for failing to press Content Production for KPIs and a clear strategy prior to renewing the funding. The best course of action seems to be to fund Content Production through February, with the expectation that they deliver on everything originally promised for “Month 3” at the time of funding (originally June 2021, now February 2022). This would provide them two months of funding, as was done at inception, to deliver on the work products previously promised, most importantly:
• Develop a content strategy based on organizational goals, define KPIs to measure the effectiveness of our efforts, and determine staffing needs.
At that time, a further budget proposal could be approved for March and April to support executing on the strategy through May 2022. This proposal would include objectives and KPIs agreed upon by both the DAO and Content Production. This would allow the community and Content Production to measure their performance.
First, it would allow the DAO to determine whether Content Production performed through February and to be confident in funding its next proposal. Is there a clear and coherent strategy with clearly measurable KPIs for Content Production?
Second, it would allow the DAO to determine performance for the following quarter (Q2 2022) using the KPIs agreed upon in the strategy.
This format would allow the DAO to approve a new budget proposal in February 2022 for additional funding to cover Q2 2022. Under such a structure, it would be clear whether this is a good fit, and both sides could squarely carry their own responsibility. At the same time, the DAO gains valuable experience negotiating KPIs with core units and holding them accountable for their performance, regardless of a positive or negative outcome.
By doing the above, we could re-align incentives and avoid future conflict, as there would be no surprises or emotional reactions; everything would instead be data-driven, as the empiric.
By February 2022, Content Production either has the capability to produce a strategy and clear KPIs, or not.
If they are able to produce the KPIs and strategy, they can make their case for funding to execute on them in Q2 2022. If not, they were well paid for their work the entire time; no hard feelings, and everyone has been treated fairly with no surprises.
If Content Production is funded through Q2 2022, then prior to approval of a Q3 2022 budget there would be appropriate measures in place to evaluate performance, which remedies the current unfortunate situation.
At the same time, during this process, Content Production may realize Maker is not a good fit for them or their skillset and capabilities, or maybe that the increased accountability in the DAO doesn’t work for them. They could always opt to restructure their core unit, join another CU, or even decide Maker is not the best place for them and find work elsewhere.
The goal is to be fair, charitable, and responsible. Conflict is to be avoided, as is waste. Transparency and accountability are vital.