Possible data source for determining compensation

Long time Maker watcher, first time poster. Have been following recent discussions around Maker governance overhead and how to compensate workers.

Coming up with a fair, decentralized way to quantify value (and pay people), is of course a giant, unsolved problem, and necessary feature for a truly decentralized treasury. Recently, I’ve been contributing to an OSS project (SourceCred) that is building a reputation protocol. I’m wondering if it might be an interesting data source. The core of the protocol is a modified PageRank algorithm (what Google is based on), which itself was modeled after citations in academic publishing (a model @rune and others have suggested exploring). I’m not sure how it would fit into the Risk Governance Framework, but it does work on Discourse forums (a place where governance work takes place), so I’ve run it on the MakerDAO Discourse for fun. Curious to see if the scores make intuitive sense to regular posters here.

A hosted instance of above can be found here, if you want to play with the weights for different variables.
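For anyone curious what “modified PageRank over a contribution graph” looks like in practice, here is a minimal, hypothetical sketch. This is not SourceCred’s actual code; the node names, edge types, and weights below are made up for illustration. The idea is that posts, replies, and :heart:s become nodes and edges in a graph, and a PageRank-style random walk over that graph produces the “cred” scores:

```python
# Minimal illustration of PageRank-style cred scoring over a contribution graph.
# NOT SourceCred's actual implementation; node names and weights are hypothetical.

import collections

# Edges point from the interaction that "flows cred" to the thing receiving it,
# e.g. a like flows cred from the liker's activity toward the liked post.
edges = [
    ("alice/reply-1", "bob/topic-1",   1.0),  # Alice replied to Bob's topic
    ("carol/like-1",  "bob/topic-1",   0.5),  # Carol liked Bob's topic
    ("bob/reply-2",   "alice/topic-2", 1.0),  # Bob replied to Alice's topic
]

nodes = {n for src, dst, _ in edges for n in (src, dst)}
out_weight = collections.defaultdict(float)
for src, _, w in edges:
    out_weight[src] += w

def pagerank(edges, nodes, damping=0.85, iters=100):
    """Plain power-iteration PageRank; the stationary score of a node is its 'cred'."""
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, dst, w in edges:
            nxt[dst] += damping * score[src] * (w / out_weight[src])
        # Nodes with no outgoing edges redistribute their score uniformly.
        dangling = sum(score[n] for n in nodes if out_weight[n] == 0)
        for n in nodes:
            nxt[n] += damping * dangling / len(nodes)
        score = nxt
    return score

for node, cred in sorted(pagerank(edges, nodes).items(), key=lambda kv: -kv[1]):
    print(f"{node:16s} {cred:.3f}")
```

In a real run each post node would also connect to a user node so that cred flows back to authors; that bookkeeping is omitted here to keep the sketch short.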

I’ve also run it on the ‘mcd-cdp-portal’ repo to see if it makes sense for scoring developers.

A hosted instance can be found here. For code, if you click the ‘legacy’ link in the upper left corner, you can drill down and see a breakdown of what contributions generated reputation points.

Full disclosure: SourceCred is paying contributors according to their cred scores, currently about $15k/week. If I link to this on the SourceCred Discord and people reply and/or :heart:, I may get paid some :money_with_wings: :grimacing: :man_shrugging:

Curious to hear any thoughts! Also curious if there are any potential issues that would prevent this from being applicable to something like Risk Team outputs, or doubts about reputation systems generally in this context.

7 Likes

Sounds like a great way to estimate effort. Of course it does not answer the question of where the money comes from, but it sounds like a great thing to try out for estimating contributions.

First of all thank you for partially including me in your chart, but I have serious objections to using any kind of algorithm to determine worker compensation.

If the algorithm is secret then people are less likely to trust the result, and if the algorithm is public then the system will at some point be gamed.

Please do remember that Maker workers will use their compensation to feed kids and pay mortgages so the majority of them will need to have a measure of income stability.

Alice: “Can we go on vacation this year, darling?”
Bob: “I have no idea, we must wait for the algorithm to decide.”

5 Likes

Sounds like a valid point, but keep in mind that if the algorithm starts to be gamed it can be changed (and I would argue that changing it should simply be part of governance).

I wonder what @s_ben thinks about it, but my understanding is that this algorithm is meant to reward a “swarm” of little contributions. I expect that full-time professionals, for example oracles and risk experts, will be compensated from a separate stable budget set well in advance, first by the Maker Foundation and eventually by governance.

1 Like

Hey! I won the forum.

On a more serious note, welcome to the forum @s_ben. This is pretty interesting, but I share many of @Planet_X’s concerns regarding using algorithms to calculate this sort of thing. Also, while it isn’t a data-driven point of view, I can give some anecdotal detail on how I spend my time doing MakerDAO things.

So far most of the work I’ve done has been on the following:

  • Google docs related to governance projects / MakerDAO projects
  • Organising and stake-holding projects on rocket chat.
  • Providing feedback to others on work that they’ve completed.

Especially recently, most of the work that I view as being high-value hasn’t occurred on the forums. Based on this I’d argue against any algorithm that was unable to capture work on google docs and project management via rocket chat.

Personally for DAO compensation I’d like a system based on:

  • A flat minimum hourly rate based on the living wage in a first world country (as calculated by a reputable group)
  • Additional compensation based on skills, experience and contribution.
  • No greater than a 10x difference in hourly rates between any two workers paid by the DAO. (10x might even be too much imo.)

5 Likes

I would imagine the money would come from the same places being discussed on other threads such as Signaling Request: Should Maker make a Treasury to manage revenue?. The simplest way would be for the Foundation to fund some kind of experimental trial. But the fact that Maker has already done the hard work of setting up on-chain payments for oracles makes me excited that it could do something similar for risk teams, paying via the vault/stability fee (or similar, still learning Maker). If working on governance becomes sustainable permissionless employment, similar to how keepers are paid, I think many more people would participate.

So the algorithm is public, and the intention is for it to always be. Gaming is the main potential weak point. There has been a lot of thought put into attack vectors and how they can be mitigated. There are a couple of ex-Googlers working on the project, as well as a PhD social scientist who has put a lot of time into this. Here’s a podcast where he goes into details about the algorithm, gaming, etc. if anyone wants to dig deeper.

So far, the project has not seen that issue. It’s been paying people for 16 weeks now according to cred scores, to dogfood it internally before recommending it to others. Here are the payouts so far. The payouts started small, then ramped up to ~$15k/week in the last couple months. My personal opinion (not speaking for the project) is that in small groups of humans that know each other, gaming will not be an issue (or it will, but there are good mechanisms for dealing with it). Scale it out to thousands, millions, with lots of money at stake, and yes, you’ll see major Sybil attacks, bots, spam, etc. But for something like Maker Risk Teams, presumably we’re talking about small groups of highly skilled, educated people that know each other, right?

This is something SourceCred the project is grappling with right now. It wants to start recruiting developers, for instance, that can build out a deployable product. But people will expect salaries, stability. There have been a few ideas floated. For instance, a Foundation could take on the volatility risk itself. Maybe it loans the contributor DAI in the beginning, taking on the risk that the contributor bails without paying it off. Then the contributor pays back the DAI until they’re self-sufficient. Or (and this is my case, as I make my living working in crypto for two DAOs, SourceCred and Decred), contributors manage the volatility themselves: keep savings, knowing some months will be up and some will be down. It may be a hard pill to swallow, but this type of employment also offers a different type of security and antifragility that traditional employers cannot. Namely, there is no “tail risk” of being fired (income going to 0 abruptly).

There are also ways to introduce stability at the algorithmic level. For instance, SourceCred contributors are currently paid via a formula (which is modifiable) that pays according to a mix of last week’s contributions and “lifetime cred” (the total value of all contributions). The lifetime cred is much slower moving and more stable, and could be made even more so if desired. One can think of contributors building up a “position” in the project, similar to how traders build large positions. If the payments from that position are stable, the income becomes relatively stable.
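To make that concrete, here is a rough sketch of that kind of blended payout. The 50/50 split and all of the numbers are hypothetical, not SourceCred’s actual parameters:

```python
# Hypothetical sketch of a blended payout: part of the weekly budget is split by
# last week's cred, part by lifetime cred. All numbers are illustrative only.

WEEKLY_BUDGET = 15_000   # e.g. ~$15k/week
RECENT_SHARE = 0.5       # fraction paid on last week's contributions
LIFETIME_SHARE = 0.5     # fraction paid on lifetime cred (one's "position" in the project)

last_week_cred = {"alice": 40, "bob": 10, "carol": 0}
lifetime_cred  = {"alice": 200, "bob": 500, "carol": 300}

def blended_payout(last_week, lifetime):
    recent_total = sum(last_week.values()) or 1
    lifetime_total = sum(lifetime.values()) or 1
    payouts = {}
    for person in lifetime:
        recent_part = WEEKLY_BUDGET * RECENT_SHARE * last_week.get(person, 0) / recent_total
        lifetime_part = WEEKLY_BUDGET * LIFETIME_SHARE * lifetime[person] / lifetime_total
        payouts[person] = round(recent_part + lifetime_part, 2)
    return payouts

print(blended_payout(last_week_cred, lifetime_cred))
# Carol contributed nothing last week, but her lifetime "position" still pays out;
# that is where the income stability comes from.
```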

In the future, when all of this goes on-chain and permissionless, and the system generates reliable enough revenue, we could see something similar to the Bitcoin mining industry, where there’s a lot of volatility, but organizations nonetheless creatively use their own capital and organizational skills to create their own stability. Bitcoin mining pools now generate sustainable revenue (enough to IPO on public markets even). I explore this in a fun thought piece, SourceCred as Community-Store-of-Value, where I substitute PoW miners in Bitcoin with PoL (Proof-of-Labor) miners in decentralized projects.

But yes, an open problem (which I have currently :grimacing: :money_with_wings: ). Curious to hear any ideas around this.

5 Likes

Ben,

Welcome to the forums. I really liked that you just popped out something that could be used as a basis for determining effort. For deciding how much to reward people, something like an algorithm is a cheap first-pass approach that literally costs almost nothing as an estimate. And you have presented us with a pretty big piece of work and detail. Thank you.

The biggest issue with such a scoring system (Reddit has been working on various ways to implement DAO and community/posting rewards for effort - it still has a long way to go really, and needs to be used and examined closely to see whether it actually represents effort) is spamming vs. real work and quality posts. One thing I see on Reddit is people effectively spamming to stake their claim to a distribution of rewards. We will have to see whether the DAO itself evolves this into something more appropriate. Since rewards also give users DAO voting tokens, there has been some concern about a concentration of governance power into too few hands (sound familiar?).

I completely agree that with smaller groups who know each other well this model could work well. Scaling it to the size of Reddit is a difficult challenge - one I have some exposure to, as I happen to be a part of the current Reddit DAO experiments.

I think since governance voting via MKR would be separate from the contributions, there isn’t the same issue here. As to payment criteria, you have some really good ideas LFW. I’m pretty much agreed that if we set a bottom minimum payment rate (say $10/hr), we shouldn’t be compensating anyone much above $100/hr, or 10x that rate; anything above that would have to be ‘special’ compensation. Though as a consultant I have billed at over $1000/hr (legal case testimony between Apple and Sony, as this required specialized hardware to be operational and literally 6 people - two lawyers, a videographer, etc. at my place for the demonstration of the technology) and billed other minor consulting work (website development) at maybe $40-50/hr, so I think the factor of 10 between the bottom and top is reasonable…

Thanks for taking the time to express these concerns @LongForWisdom. It’s valuable feedback!

So, this is a problem the project is grappling with currently. The algorithm works reasonably well at valuing work that shows up on public platforms (GitHub & Discourse for now, though one can write plugins to ingest any type of data). The project lead was just expressing frustration on the last community call actually that their emotional labor resolving a recent dispute (over a change in cred scores) was not showing up in the graph :sweat_smile: :thinking:

The base PageRank algorithm is robust enough to take other inputs. One solution would be to add new heuristics that are more expressive. I explore different heuristics (and trust issues generally) in this medium article:

But I would also argue (again my personal opinion, not the project’s per se), that Discourse in particular may reward this “invisible” labor indirectly. For instance, as you point out:

Is this not in part because people who appreciate your other efforts :heart: and reply to your posts more? Every time someone interacts with your posts, they flow to you some of the ‘cred’ generated from their own posts.

2 Likes

Usernames like and reply to posts, not necessarily unique humans. There is no such thing as proof of unique identity on the internet.

True, the Sybil problem is largely unsolved, for all platforms. However: a) consistent work over time is a robust measure of “humanness” (e.g. I know you’re a human :) ); b) Sybil problems are typically not as prominent in smaller groups (below the Dunbar number (100-250 people), people have proven the ability to keep tabs on each other), and the risk governance community is likely to be below this for a long time (or partitioned off into subcommunities of this size); c) if a bot does risk analysis that other humans find valuable, what difference does it make to MakerDAO’s mission?

2 Likes

Thanks for the thoughtful reply.

There’s truth to this. What excites me is that the cheap/free first pass not only gets “in the neighborhood” for the projects I’ve run it on, it also serves as a starting point for discussion. Indeed, SourceCred (which is paying people to dogfood, successfully so far IMO) just had a call where we decided to do weekly reviews of contribution scores. The thing is, without a starting point/framework, you’ve got a blank slate, and the convo typically dies before it starts (in my experience).

A fair point. Reddit is a good example/cautionary tale. I’m watching the daonuts experiment closely (Reddit’s first foray into on-chain tokenization of karma on r/ethtrader and governance of elements of the subreddit (e.g. controlling the banner ad)). It seems to be going well so far, but I’m curious to hear critiques from those with more experience (admittedly I am not part of the community, just playing with the systems to learn). A founder of the daonuts project was working on a SourceCred integration prior to daonuts actually. Called the CredDAO, it’s a GitHub app that airdrops ERC-20 tokens according to cred scores in a GitHub repo. I’m hoping to run into them at EthDenver and pick their brain :slight_smile:

Ultimately, I agree that scaling this to millions of people is likely to result in being overrun with bots/spam, especially if enough money is involved. However, for many applications, such as Risk Teams, I imagine we’re talking about a fairly small group of people who will be qualified/interested; presumably below the Dunbar number (100-250). If not, the community could “fork” into manageable sizes, like what churches do when their congregations grow too large. Smaller groups of people are fairly good at spotting and punishing bad behavior (e.g. how many people are registered on the MakerDAO Discourse? Is it not a great place, largely free of spam (I hope this doesn’t count :innocent: )? Do the initial SourceCred scores not get in the ballpark?). The system could also start out somewhat centralized, where an authority has the ability to change weights, setting spam to 0 (this is actually the way SourceCred operates currently while it explores avenues to further decentralization).
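To illustrate that last point, “setting spam to 0” is really just a weight configuration applied before scores are computed. The key names below are made up for illustration, not SourceCred’s actual config format:

```python
# Hypothetical weight configuration; names are illustrative, not SourceCred's API.

edge_weights = {
    "discourse/reply":  1.0,
    "discourse/like":   0.5,
    "github/pr-merged": 4.0,
    "github/comment":   0.5,
}

# A moderator (or, later, a more decentralized process) can zero out specific
# contributors or contributions identified as spam:
node_weights = {
    "user/known-spammer": 0.0,   # all of this account's activity scores 0
    "topic/off-topic-ad": 0.0,
}

def effective_weight(edge_type, src_node, dst_node):
    """Edge weight scaled by the weights of both endpoints."""
    w = edge_weights.get(edge_type, 0.0)
    w *= node_weights.get(src_node, 1.0)
    w *= node_weights.get(dst_node, 1.0)
    return w

print(effective_weight("discourse/reply", "user/known-spammer", "topic/good-post"))  # -> 0.0
```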

This is a tricky one. While I see the desire to have one’s time valued within a certain range, I’m not sure time is the best valuation metric. Perhaps it is one input among others, but the most promising permissionless/decentralized systems I’ve seen so far focus on a quantifiable work output (e.g. keepers are paid for liquidations, not time spent educating themselves or maintaining server infrastructure). The risk of spending time on a task that ends up poorly compensated can be mitigated in a number of ways. For instance, since I essentially get paid “royalties” on every contribution in the SourceCred graph (every GitHub PR/comment/:heart:/etc., Discourse topic/reply/etc.), my “royalty stream” is fairly constant, even if I don’t get paid much for a particular piece of work. E.g. I will likely write up this feedback I’m getting on the SourceCred Discourse, and will get paid some amount for that (maybe only worth a low hourly wage, but not $0). There are also high-level variables to tweak that can make income more or less variable, as well as financial mechanisms from the traditional finance world that could be brought to bear. E.g. debt financing (a party pays you now for a claim on future rewards), etc.

1 Like

I go into this more at the end of my other reply to @MakerMan, but essentially yes. I haven’t looked up the numbers, but I likely get a “royalty stream” generated from several hundred individual contributions on the SourceCred GitHub and Discourse (the two platforms it integrates with currently). There are many creative ways the Foundation could create stability. For instance, one could set a threshold one needs to reach to start getting paid. One could also pay strictly according to “lifetime” cred, which is relatively slow moving from week to week (the rate at which this varies would be an important parameter for a project to set, i.e. how much to weight ‘old guard’ vs. ‘new guard’). In fact, if I stopped contributing to SourceCred, I would still get payments indefinitely (the way it’s set up currently, and assuming investors keep buying SourceCred’s token, Grain); they would just fade to negligible amounts over time as more people contributed.

Yes. In my personal opinion, the “out-of-the-box” scores from running the algorithm on individual repos/forums get “in the neighborhood” and are useful immediately (assuming you already have moderation of spam via GitHub/Discourse moderation tools, which I assume Maker already has in place). But some kind of mechanism is needed to reach consensus if you want to change parameters, or to try to quantify value “across” different repos/forums. While I have researched and written about Maker’s Risk Governance Framework in the past (~6 mo ago I wrote this summary of Maker governance, paid for by Decred stakeholders using their decentralized proposal system (funding proposal)), I don’t have visibility into the day-to-day process and ‘artifacts’ generated by Risk Teams. Would be curious if there is something public I could check out.

@s_ben this is all really fascinating to read, a huge thank you for bringing this to the forum. This has definitely given me a lot to think about. The more I read the more I am coming around to the idea of using a data source like this as one input into a system for compensation. I’m not sure I would be comfortable removing humans entirely from the system based on what you’ve explained so far, but I’m seeing more value in this the more I think about it.

Interesting comments about the Dunbar number and the applicability to smaller communities over larger ones. I think you’re correct here. I can see a system like this working better on a smaller scale than a large one.

I’d be curious to know if you have any thoughts on how to ensure a fair and equitable wage distribution in a system based on results rather than time spent. I’m not necessarily against paying for results rather than time spent (indeed in many ways that makes more sense to me), but my priority in this area is to ensure we are paying employees in a fair and equitable manner.

I do worry that if we pay for results rather than time spent that it will lead to a proliferation of low-effort high-value work.

I just read back that sentence and I have no idea why I’d be worried about that, it actually sounds fantastic.

Perhaps some of my worry can be justified in that results-driven compensation does not necessarily capture work quality. As an example, let’s take a Risk Model. I don’t have much familiarity with that sort of work, so I highly doubt that I can recognise a fully thought-out, triple-checked and solid risk model versus a half-assed risk model in which the proper checks and considerations have not taken place. Under this system I think I would contribute more SourceCred to the creator of 3 half-assed risk models than to the creator of a single high-quality model, because I am not qualified to recognise the difference.

4 Likes

@Planet_X

I strongly agree with you about this point:

I think this situation arises for every member here, and tough decisions sometimes have to be made.

Part of the project could be handled by an algorithm, while another part would need human decisions.

Totally agree. The original version of SourceCred (which was scrapped and rebuilt) focused too much on the algorithm. It has since pivoted to being more of a tool for communities to reach consensus on the value of contributions. It is “intersubjective”, blending objective metrics and human subjectivity. The founder talks a bit about this in this podcast if anyone is interested in the broader vision.

2 Likes

Really interesting. Looking into SourceCred more to better understand how this works and what controls/modifications Maker could put on it. SourceCred seems to operate primarily on work like code and GitHub. A lot of the work around Maker seems to me harder to quantify or verify, like biz dev or general communication between people/groups.

There are these big questions/anxieties in Maker about who gets paid, how, and for what in a DAO-based world. I feel like this community doesn’t have a clear sense of what work is worth paying for.

In my head I think we need to figure out what groups of activities Maker holders are willing to compensate. Starting with Risk, Oracle, and integrations (technical) work seems like the way forward, but some governance or community roles could be looked at too. So far I think most people who believe Maker needs a budget would want to start with compensating those first three groups.

Using an algorithm to determine payouts saves a lot of time and resources, but a lot of the concerns mentioned above seem valid. We could maybe create a situation where governance allocates budgets to each “core” group, to be spent by the “facilitator” of each group. Instead of the facilitator just finding someone to contract or employ, they could use a SourceCred-type construction for compensating certain tasks that are better suited to automating (maybe comments, meetings, I don’t know what else since I haven’t really dived into it), while still offering contracting for other tasks that are not as explicit. The “facilitator” would be paid a salary through a contract with governance.

To do that, I think first identifying potential “facilitators” of each core and working with them to develop a plan for their “core” makes the most sense. Cyrus, for example, could facilitate the risk “core” and work with the community on how he could best use resources to develop financial risk framework tools and theory. With a deeper understanding of this algorithm it seems conceivable that he could automate a certain % of his budget. I don’t see how the “community” can decide how to pay financial-engineer types without input from the group itself. Rich could fund incentives for governance tools, communications and activities (basically what happens now, just with a budget allocated by the DAO to Rich).

Honestly this whole topic is pretty overwhelming to me, even though I initiated some of the conversation before. Hopefully we can figure out some incremental, actionable step soon. I’m personally working on a little project trying to roughly analyze the academic literature on organizational management architecture for the Maker context. Maybe it’ll be useful, maybe not.

3 Likes

@s_ben I like this podcast.

More and more people will get into this, which opens things up for open-source contributions, contributors, and projects.

Great information, and this for sure could be introduced here. Thumbs up @s_ben.

Would be great to see SourceCred used as an input! Happy to help explore that possibility if the community decides to pursue it.

It definitely works best when there is a well-defined problem. I would think the formulaic nature of Risk Team work (going through the Risk Governance Framework to generate analysis, reports, etc.), combined with its academic, research-like nature (academic citations being what PageRank was originally modeled on), could make it a good fit. Though I think it also can do a good job capturing more day-to-day granular work, like what is done on this forum (the algorithm and weights could be tweaked from what I’ve posted to better fit Maker’s needs).

This is a problem SourceCred is grappling with itself. I’m happy with the way SourceCred is currently paying contributors (based on lifetime cred), because my “royalty stream” from past contributions has become fairly consistent over time. However, the project is looking to recruit devs and other key contributors, and they will likely have more traditional expectations around employment. A SourceCred Foundation is currently being formed, which can pay salaries or perform other functions if needed. Though the project is also exploring novel “employment” mechanisms where contributors are still paid via cred scores, but are able to “de-risk” the volatility. For instance, the project could front new contributors cred, which roughly assures them a certain amount per month (currently Protocol Labs is buying Grain (SourceCred’s token) at a fixed exchange rate, so cred can reliably be sold for fiat). They then pay back the “cred loan” with contributions. Just one idea.
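Here is a rough, purely illustrative sketch of how such a “cred loan” could play out. The amounts and the repayment rule are made up, and it assumes cred/Grain can be sold for fiat at a fixed rate (e.g. via a buyer of last resort); this is not how SourceCred actually pays people today:

```python
# Illustrative "cred advance" schedule: the project fronts a new contributor a
# fixed monthly amount; in months where they earn more than the advance, the
# surplus pays the outstanding balance down. All numbers are hypothetical.

MONTHLY_ADVANCE = 4_000  # USD guaranteed to the contributor each month
earned_per_month = [1_000, 2_000, 3_500, 5_000, 6_000, 6_500]  # USD value of cred earned

balance = 0  # outstanding advance owed by the contributor
for month, earned in enumerate(earned_per_month, start=1):
    if earned < MONTHLY_ADVANCE:
        balance += MONTHLY_ADVANCE - earned   # project tops them up to the advance
        take_home = MONTHLY_ADVANCE
    else:
        repayment = min(balance, earned - MONTHLY_ADVANCE)
        balance -= repayment                  # surplus months pay the loan back
        take_home = earned - repayment
    print(f"month {month}: earned ${earned}, take-home ${take_home}, owed ${balance}")
```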

I will say that, having worked in several Bay Area startups, and having spent the last ~1.5 years working in (and for) crypto, I find that crypto, while volatile and challenging, presents a different type of security and antifragility. I’m not worried about getting fired and my income going to zero abruptly (which has happened to me). I think once people experience this “permissionless employment”, lightbulbs will go off, similar to the experience of sending Bitcoin for the first time. When I post a recap of the feedback I have received from this (great!) thread, I am confident I will get a number of :heart:s, which will flow cred (and $) to me.

There are two main ways quality can be enforced:

  • Value contributions higher if other “high cred” contributors (who have gained reputation by producing high-quality work in the past) interact with those contributions. For instance, more cred flows to posts on Discourse if they are :heart:'ed or replied to. This effect can be amplified if needed by tweaking the algorithm.
  • Have a process whereby contributions are filtered. For instance, SourceCred is implementing a review culture/process, where standard OSS GitHub processes and norms are used. If a document is spam/low effort, per OSS norms, it will not get merged, and will therefore get little (or 0) cred. This of course introduces a centralization factor with the maintainers. But perhaps a convenient/necessary one at this stage. Alternate governance mechanisms could be used in place of GitHub as well, which is also being discussed.

I think I may have expressed myself poorly in my last comment. I am less concerned about stability of income, and more concerned about paying a fair minimum wage.

Traditionally, remuneration for positions looks something like this:

Time spent × (rate determined by quality, skills, and experience)

With SourceCred, time and quality are not separated; they are combined into a single measure of ‘appreciated output’ (again, I’m not convinced this isn’t better, but it’s definitely different). This means that we are not able to set a minimum hourly remuneration and cannot ensure that we are paying people fairly for their time.

I’d be curious if there is a way of weighting or tweaking the algorithm to separate these two variables? Can SourceCred produce a quantity value (number / length of posts / commits) and a quality value (likes, replies, etc) separately?

This might allow us to set ‘quality bands’ which we could use to determine hourly rate, and a ‘quantity’ value to estimate the number of hours worked. Do you have any thoughts on if this could work, or if it’s even a good idea?
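To sketch what I mean with made-up numbers (I have no idea whether SourceCred can actually produce these two inputs; the bands and rates below are purely hypothetical):

```python
# Hypothetical "quality band" scheme: quantity and quality are scored separately,
# the quality score selects an hourly rate band, and quantity estimates hours.
# This is not a SourceCred feature, just a sketch of the idea above.

QUALITY_BANDS = [   # (minimum quality score, hourly rate in DAI)
    (0.0, 15),
    (0.5, 40),
    (0.8, 100),
]

def rate_for(quality):
    rate = QUALITY_BANDS[0][1]
    for threshold, band_rate in QUALITY_BANDS:
        if quality >= threshold:
            rate = band_rate
    return rate

def weekly_pay(estimated_hours, quality_score):
    # estimated_hours: from volume/length of posts and commits
    # quality_score: from likes, replies, interactions from high-cred contributors (0..1)
    return estimated_hours * rate_for(quality_score)

print(weekly_pay(estimated_hours=20, quality_score=0.85))  # -> 2000
print(weekly_pay(estimated_hours=20, quality_score=0.30))  # -> 300
```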

I’m also curious as to how this works in terms of absolute versus relative values. Is it accurate to say the example models you’ve posted show relative contributions rather than absolute contributions? If I’m understanding your own system of royalties correctly, there is a pool of income which is split between all contributors? If instead of having a fixed pool we want to pay on an absolute basis (and deal with any debt incurred), is that possible? I worry that the relative contribution model could produce unhealthy competition: my contributions don’t diminish the value of others’ work, but if I understand correctly, if I contributed a large chunk of work that increased my royalties, it would reduce the royalties of others (because it’s based on a relative weighting).
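To make that worry concrete with made-up numbers (this is a sketch of my understanding of the two models, not of how SourceCred is actually configured):

```python
# Fixed-pool (relative) vs. per-cred (absolute) payouts, with made-up numbers.

cred = {"alice": 100, "bob": 100}
POOL = 1_000        # fixed weekly pool, split by relative cred share
RATE_PER_CRED = 5   # absolute alternative: a fixed payment per cred point

def relative_payouts(cred):
    total = sum(cred.values())
    return {p: POOL * c / total for p, c in cred.items()}

def absolute_payouts(cred):
    return {p: RATE_PER_CRED * c for p, c in cred.items()}

print(relative_payouts(cred))  # {'alice': 500.0, 'bob': 500.0}
print(absolute_payouts(cred))  # {'alice': 500, 'bob': 500}

cred["alice"] = 300            # Alice ships a big chunk of extra work
print(relative_payouts(cred))  # {'alice': 750.0, 'bob': 250.0} <- Bob is diluted
print(absolute_payouts(cred))  # {'alice': 1500, 'bob': 500}    <- Bob unaffected,
                               #    but the total budget is no longer fixed
```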