The time horizon over which you evaluate good/bad decisions is also important and tricky.
Nice one, I thought flop auctions would be the answer. Ok, so we know the dependent variable (i.e. target outcome). Monitoring the regressors (independent variables) and their performance under a constant governance process should really be the goal. In a true experiment, i.e. hypothesis testing, we would isolate changes in a regressor (risk parameter changes) to measure their impact on the outcome.
I do not believe this is done now, nor am I sure it is even feasible at a micro scale under the current vote-bundling framework, where multiple inputs are changed at once.
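To make the identifiability problem concrete, here is a minimal sketch (all parameter names and effect sizes are hypothetical, invented for illustration): when two risk parameters are always voted on and changed together, the design matrix is rank-deficient and their individual effects on the outcome cannot be separated; when they are changed independently, ordinary least squares recovers them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical risk-parameter changes with assumed true effects of
# +2.0 and +1.0 on some outcome metric (made-up numbers for illustration).
fee_change = rng.normal(0, 1, n)

# Bundled vote: the second parameter always moves in lockstep with the
# first, so the two regressors are perfectly collinear.
ceiling_change_bundled = fee_change.copy()
X_bundled = np.column_stack([fee_change, ceiling_change_bundled])

# Rank 1 instead of 2: the individual effects are not identifiable,
# no matter how much data we collect.
print(np.linalg.matrix_rank(X_bundled))  # 1

# Isolated changes: the parameters are varied independently, so the
# individual effects become recoverable from observed outcomes.
ceiling_change_iso = rng.normal(0, 1, n)
X_iso = np.column_stack([fee_change, ceiling_change_iso])
y_iso = 2.0 * fee_change + 1.0 * ceiling_change_iso + rng.normal(0, 0.1, n)
beta, *_ = np.linalg.lstsq(X_iso, y_iso, rcond=None)
print(beta.round(1))  # approximately [2.0, 1.0]
```

This is of course a toy under strong assumptions (linear effects, no confounders, independent noise); the point is only that bundled changes destroy identifiability by construction.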
Additionally, I don’t think the process has been defined well enough to establish a control. But this is an opportunity in my opinion: we get to approach the issue from first principles and build the process moving forward with this outcome in mind. Would love to pick your brain at some point to see how you think about it in more detail.
Reinstating a scientific method for decision making. That would work, yeah.
I mean we literally call it “scientific governance and risk”
Like I said, would love to work with you on this if you have the time!
Time is my (our) most limited resource. But I have by default an “open door policy”. So yeah.
Thanks for the thoughtful post @g_dip .
Could you provide a definition for censorship in the context of DAOs?
Could you provide one or two brief examples? Thanks
As mentioned, I equate the concept with sovereignty. A sovereign entity can do what it pleases without the concern that it can be stopped. As a general rule, the US and China are sovereign nations because (barring an unprecedented level of coordination) no other nation or current group of nations can actually stop them from doing as they please. In this sense, they are censorship resistant.
I spoke with Vitalik on Skype in 2017 and got his help coming up with a definition of a DAO. I continually proposed definitions, and he would provide an example of something that fit the definition but was clearly not a DAO. Eventually I arrived at the definition below, which he was unable to find issue with:
“An entity that exists in multiple redundant locations, has sovereign control over internal capital, and prohibits unintended modifications of its assets or mechanisms.”
To me the last part about unintended modifications is about having immutable terminal goals… essentially autonomous goals. I define the limits of autonomy as follows:
“A system is not autonomous if incentives preventing open-ended changes to mechanisms or purpose are intentionally less than the plausible maximum. If a system intentionally has a weakness that enables agents to override the intended functioning of the system, it is not autonomous.”
In other words, if a system isn’t designed to try in every way to prevent, or make impossible, actions by governance token holders that conflict with the mission statement / original intent of the system, it is not autonomous, and is therefore not a DAO. Lots of systems call themselves DAOs even when governance token holders are able to act against the mission statement or take all user funds, and even when mechanisms exist that could prevent this, they chose not to implement them.