Week 2 DAI Distributions - SourceCred trial

The second week of SourceCred trial payouts is here!

Below you can find this week’s DAI payouts, Cred scores, and the top 10 contributions of the week by Cred created.

For full scores and payout data, see this Observable notebook. A hosted instance of SourceCred running on the Maker forum can be found here. Further details, including how scores and payouts are calculated, can be found in Maker SourceCred Trial.


This week $1,250 worth of DAI was distributed based on Cred scores. Below are the payout amounts for the top 20 contributors.


At the end of the month, contributors who opt in to receiving payments and have at least $10 of DAI in accrued payouts can redeem their balance for DAI on-chain. The balance available for redemption will be equal to the sum of weekly payouts posted to the forum. To opt in, please send a direct message to @sourcecred-trial-adm indicating your desire to participate in the trial and providing an Ethereum address to send payments to.
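The mechanics above can be sketched in a few lines of Python: the weekly pool is split proportionally to Cred earned, balances accrue, and only opted-in contributors above the $10 minimum can redeem. All names and Cred figures below are illustrative assumptions, not actual trial data:

```python
# Sketch of the trial's payout mechanics, under the assumptions that
# the weekly pool is split pro rata by Cred and that redemption
# requires opting in plus a $10 minimum balance.

WEEKLY_POOL_DAI = 1250.0
REDEMPTION_MIN_DAI = 10.0

def weekly_payouts(cred_scores):
    """Split the weekly DAI pool proportionally to Cred earned."""
    total_cred = sum(cred_scores.values())
    return {user: WEEKLY_POOL_DAI * cred / total_cred
            for user, cred in cred_scores.items()}

def redeemable(accrued, opted_in):
    """Accrued balances redeemable on-chain at month's end."""
    return {user: bal for user, bal in accrued.items()
            if user in opted_in and bal >= REDEMPTION_MIN_DAI}

# Example with hypothetical contributors and two weeks of scores.
week1 = weekly_payouts({"alice": 30.0, "bob": 10.0, "carol": 10.0})
week2 = weekly_payouts({"alice": 5.0, "bob": 20.0})
accrued = {}
for week in (week1, week2):
    for user, amount in week.items():
        accrued[user] = accrued.get(user, 0.0) + amount
print(redeemable(accrued, opted_in={"alice", "bob"}))
```

Note that balances still accrue for contributors who have not opted in; they simply are not redeemable until the contributor opts in.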

Cred scores

Below is a breakdown of Cred earned this week along with lifetime Cred scores for the top contributors.


Top Contributions

Below are the top 10 posts created last week, ranked by Cred generated.

If you have any questions or concerns about the scores, data or presentation thereof, please don’t hesitate to ask!



I like the addition of the top contributions as well as the Cred scores. It is going to be interesting to see how this plays out. A weird question, perhaps: I know there is currently no way to do what I'm asking, but I was wondering how one could exclude people who can't be included in the DAI payouts.

I have no problem with Foundation folks getting paid in DAI, because the whole funding for this is coming from the Foundation. But as an example question: could you eliminate, say, Rich and LongForWisdom and whomever else needed to be out, and still pay out the 1,250 DAI/week to the remaining contributors? The point is that one still wants important people to earn Cred, while the people who can't receive Grain (or in this case DAI) get none, and everything still works.


Interesting that owocki and peter_jones had 5.8% and 5.9% of weekly Cred.

So we discussed this formula at one point. If I recall correctly, we decided not to use it for a couple reasons:

  1. This could introduce significant variability into the payouts. For instance, for the first week, I believe only four people had opted in by the time the first payments went out. So they would have received hundreds of dollars apiece, for potentially minor contributions. We’re trying to be as scientific as possible with the experiment, so this variability could introduce “noise” and make it more difficult to determine if the incentives are creating the desired outcomes. I should point out the Foundation is distributing “at least $5k worth of DAI each month”. While we’re starting with $5k worth of DAI a month, if we’re not seeing overt gaming or other adverse effects, the Foundation could raise this if it wanted to see how that changes incentives. This more conservative approach is what SourceCred did internally, systematically raising Grain payouts until it reached the “sweet spot” where contributors started contributing more regularly.
  2. By calculating amounts for Foundation employees as if they were in the trial, it should give them a sense of what they would be getting paid in this new scheme.

@s_ben thanks for the quick reply.

I thought that, with respect to the DAI payments (not sure how this would work with Grain), payments would accumulate, so that one could still hold a user's payment until one month after the trial period ended. So basically the payout would still be spread across a lot of people, just those that signed up. (Am I wrong on this?) Whether or not one eliminates people from the Grain (or in this case DAI) payout, I can see how running this with everyone seeing their totals is interesting from a trial point of view. I could even see these payments going back to the Foundation folks, with the Foundation basically using them to offset a salary and track their contributions against their expectations. I mean, if say @LongForWisdom performed above and beyond, maybe use the DAI from their SourceCred as a partial or full bonus on top of their normal pay, etc.

Yeah, that makes sense from a trial perspective, and even in the context of perhaps some extra compensation above and beyond, even if it is partial (say for every $200 DAI LFW earns, he gets the full $200 DAI but $100 less on his salary for that week, giving him a $100 DAI bonus). I kind of like that conceptually. It could apply to everyone on a Foundation salary, in the sense of having part of their salary come from participation and contributions here as a kind of mini bonus, since it would encourage them to come too. But that is entirely up to the Foundation folks. In the end, I like the idea of having everyone in this trial so we can all see what everyone is doing ‘on equal footing’ from a Cred point of view, and even from the ‘Grain or payouts perspective’, just to get an idea of how this would work.

In the end, if this works out really well, I could see governance funding it. But then again, we still have to clear MIP14, probably along with a few other MIPs and a heck of a lot of discussion, before anyone is paid by governance vs. the Foundation at this point.

Sounds a little complicated, at least for the time being.

If the trial goes well and MKR Holders want to continue it in its current form, then we can look into expanding the scope in the future.

My wording isn’t the clearest here :confused: You’re right, in that payments do still accumulate for users that haven’t opted in, up until 1 month after the trial. We could, for instance, after the opt-in deadline (Sept 1st) has passed, take the DAI that would have gone to the excluded people and distribute it to those that opted in, instead of returning it to the Foundation. SourceCred can in theory implement any of these schemes. However, it could introduce complexity, and change the costs the Maker Foundation has planned for. Will defer to @LongForWisdom and @rich.brown on that one, but my recommendation would be to keep the current plan for now and just raise the total amount distributed (knowing some of that won’t be distributed due to excluded parties) should we want to increase payouts.

Also, if Maker decides to continue using SourceCred beyond the trial, with the intention of exploring paying contributors directly from the protocol, I could see it experimenting with a different payout mechanism more like what you’re describing.

Thank you for the reply @s_ben I think I have a handle on this. I think my question was a curiosity rather than more of some ‘issue’. I see there would be multiple ways to handle it but the solutions introduce more complexity. I generally like things to be simple. Definitely a @rich.brown and @LongForWisdom and foundation thing to solve.

One could go the reverse route and basically do what you suggest (taking the payments for those excluded and/or opted out and using them to boost everyone who has opted in) to make a kind of fixed reward spread over whomever has opted ‘in’. The alternative is to simply let the payments for those that have opted out to ‘go back to the bank’ so to speak and then just adjust payouts until forum activity reaches a level and quality everyone likes.

Which brings me to another question. You wrote that SourceCred itself had to optimize the Grain payouts until it reached the “sweet spot” where contributors started contributing more regularly.

What was the science or model SourceCred used to determine this? I am wondering if this was a subjective metric, one defined by the SourceCred model itself (say, something like a good spread of Cred), something like the number of posts per user, or...? Maker as a community harps on using data and science to optimize things, so my question is: what scientific tools are available as a performance metric to optimize toward these ‘community-defined (and possibly changing) sweet spots’? As one who trains operators to tune beam lines, we tend to have a few rules:

  1. Don’t molly-coddle the beam (i.e. when you are tuning, make an observable change).
  2. Make sure you have a good signal to tune on (e.g. tuning on a signal buried in noise is usually not possible or easy). Also pay attention to system response latency and reproducibility. Everything has a delay, and some tuning devices don’t reproduce or have varying types of hysteresis.
  3. Always go past the peak to confirm it, and then come back onto the peak with the tuning control, so that you know you are optimized to the peak at the time.
  4. A good tuner is faster at 1-3 for every knob/control, because one often has to tune tens if not thousands of devices, and the faster one can explore the total space, the better the resulting tune generally.
  5. Finally, pay attention to what you are tuning and the general working ranges. Too often, tuners would grab knobs they shouldn’t, and we would end up with ‘whacky’ tunes that needed to be ‘fixed’ again to put certain controls back into their expected working ranges.

So I am curious: when it came to SourceCred doing its own optimization, what signals were you using, and what knobs were you changing to see actual changes in the signal (in effect tuning the system to optimize it)? And finally, what did the optimization look like for SourceCred itself?

Thanks in advance for any insight and the work you guys are doing. I think these are really interesting systems and tools to manage virtual communities around productive discourse.


I see what you mean here. Going to quibble on the word ‘boosting’, just in case Maker decides to use it down the line. In SourceCred, boosting specifically refers to a mechanism where actors in the system decide to stake/burn Grain on a contribution, thereby gaining a share of its future Cred. It thus creates a “prediction market on ideas”, incentivizing more active participation than just distributing rewards according to Cred scores.

I wouldn’t describe the process SourceCred used as science, per se. But it did try to do it as systematically as possible, in a controlled and responsible way. As seen in the record of payouts below, SourceCred started by distributing $500 worth of Grain a week, then increased that each week (with some variation corresponding to peaks and valleys in output) until it began to see “material redemptions” (contributors actually selling Grain; at first nobody was selling) and our Temporary Benevolent Dictator @decentralion saw contributors start to contribute in a more “full-time” capacity, more confident in contributing as a viable path forward. That happened about 11 weeks in, when we hit $15,000/week worth of Grain.

We stayed at $15k until a couple months ago, when we upped that to $20k/week.

So, definitely not as scientific as tuning laser beams :stuck_out_tongue: But the results and continued growth of the SourceCred community have us confident enough in the approach to recommend it. I could definitely see Maker getting more scientific and rigorous in its experimentation if it decided to continue using SourceCred and further integrating it. Lots of brainpower here, and the design space is large enough to allow for the expression of knowledge from numerous fields (finance, management, decentralized governance, machine learning, even physics probably).