<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>jmc</title>
        <link>https://paragraph.com/@jmcook</link>
        <description></description>
        <lastBuildDate>Fri, 03 Apr 2026 17:45:48 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>jmc</title>
            <url>https://storage.googleapis.com/papyrus_images/dfa33f7bcc8fd51cda552f1dc4a59078ed6d16bb742106a3b55e0afe87572322.png</url>
            <link>https://paragraph.com/@jmcook</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[The limits of Sybil defense (and how composability might help)]]></title>
            <link>https://paragraph.com/@jmcook/the-limits-of-sybil-defense-and-how-composability-might-help</link>
            <guid>GWZaDbN982CZP3ojVehP</guid>
            <pubDate>Tue, 20 Sep 2022 08:05:52 GMT</pubDate>
            <description><![CDATA[Do Sybils dream of electric sheep? Do Androids Dream of Electric Sheep tells the story of a detective tasked with eliminating humanoid "replicants" that are almost indistinguishable from natural humans. The detective does this using a system of tests, including an instrumented interview that looks for subtle "tells" such as a limited ability to make complex moral judgments in hypothetical scenarios. Sybil defenders are similarly tasked with distinguishing real and virtual humans in a mixed populatio...]]></description>
            <content:encoded><![CDATA[<h2 id="h-do-sybils-dream-of-electric-sheep" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Do Sybils dream of electric sheep?</h2><p>Do Androids Dream of Electric Sheep tells the story of a detective tasked with eliminating humanoid &quot;replicants&quot; that are almost indistinguishable from natural humans. The detective does this using a system of tests, including an instrumented interview that looks for subtle &quot;tells&quot; such as a limited ability to make complex moral judgments in hypothetical scenarios. Sybil defenders are similarly tasked with distinguishing real and virtual humans in a mixed population where they are difficult to tell apart. They too look for subtle &quot;tells&quot; that give Sybils away. Sometimes the Sybil signals are obvious and unambiguous, sometimes they are not. The additional complication for Sybil hunters is that the entire population exists in a digital space where a human&apos;s physical existence cannot be proven by their presence - it can only be demonstrated using forgeable proxies. Reliably linking actions to identities is therefore a subtle science that pieces together multiple lines of evidence to build a personhood profile.</p><p>One such line of evidence is proof that a person has participated in certain activities that would be difficult, time-consuming or expensive for someone to fake. Gitcoin Passport is used to collect approved participation &apos;stamps&apos; and combine them into a score that acts as a continuous measure of an entity&apos;s personhood. Another line of evidence is the extent to which an individual&apos;s behaviour matches that of a typical Sybil. There are many telltale actions that, when taken together, can be used as powerful diagnostic tools for identifying Sybils. Machine learning algorithms can quickly match an individual&apos;s behaviour against that of known Sybils to determine their trustability, like an automated Voight-Kampff test. 
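</p><p>As a toy illustration of the stamp-scoring idea (the stamp names, weights and threshold below are invented for this sketch, not Gitcoin Passport&apos;s actual model), a personhood score can be a weighted sum over an account&apos;s verified stamps:</p>

```python
# Hypothetical sketch: combine verified participation "stamps" into a
# single personhood score. Stamp names, weights and the threshold are
# invented for illustration; Gitcoin Passport's real model differs.
STAMP_WEIGHTS = {
    "github_account": 1.0,
    "twitter_account": 0.5,
    "ens_name": 1.5,
    "poap_attendance": 2.0,
}

def personhood_score(verified_stamps):
    """Sum the weights of the stamps an account has verified."""
    return sum(STAMP_WEIGHTS.get(s, 0.0) for s in set(verified_stamps))

def is_trusted(verified_stamps, threshold=3.0):
    """Treat an account as a likely human once its score clears the threshold."""
    return personhood_score(verified_stamps) >= threshold
```

<p>Under these assumed weights an account holding only a social media stamp scores 0.5 and fails, while one that adds an ENS name and POAP attendance clears the bar - the score is a continuous measure of personhood rather than a binary human/Sybil verdict.</p><p>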
A high degree of automation can be achieved by ensuring Gitcoin grant voters, reviewers and grant owners meet thresholds of trustability as tested proactively using Gitcoin Passport evidence and retrospectively using machine learning behaviour analysis. An adversary is then forced to expend a sufficient amount of time, effort and/or capital to create virtual humans that fool the system into thinking they are real. As more and more effective detection methods are created, adversaries are forced to invest in more and more human-like replicants.</p><h2 id="h-plutocratic-tendencies" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Plutocratic tendencies</h2><p><em>Cost-of-forgery</em> is a concept aiming to create an economic environment where rational actors are disincentivized from attacking a system. One way to manipulate the environment is simply to raise the cost of attack to some unobtainable level, but without excluding honest participants. The problem is that simply raising the cost really just reduces to wealth-gating. This creates a two-tier community - people who can afford to attack and people who can&apos;t. There is also a risk that the concept bleeds into wealth-gating participation, not just attacks, which would unfairly eliminate an honest but less-wealthy portion of the community (i.e. increasing the cost of demonstrating personhood for honest users as a side effect of increasing the cost of forgery for attackers). To some extent, this is also the case with proof-of-stake: attackers are required to accumulate and then risk losing a large amount of capital in the form of staked crypto in order to unfairly influence the past or future contents of a blockchain. For Ethereum proof-of-stake the thresholds are 34%, 51% and 66% of the total staked ether on the network for various attacks on liveness or security - tens of billions of dollars for even the cheapest attack. 
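</p><p>To make the orders of magnitude concrete - using an assumed total stake and ether price purely for illustration - the minimum capital for each attack threshold is simple arithmetic:</p>

```python
# Back-of-the-envelope attack cost. TOTAL_STAKED_ETH and ETH_PRICE_USD
# are assumed round numbers for illustration; real values vary over time.
TOTAL_STAKED_ETH = 30_000_000
ETH_PRICE_USD = 2_000

def attack_cost_usd(threshold_fraction):
    """Capital needed to control the given fraction of total stake."""
    return TOTAL_STAKED_ETH * threshold_fraction * ETH_PRICE_USD

for pct in (0.34, 0.51, 0.66):
    print(f"{pct:.0%} attack requires ~${attack_cost_usd(pct) / 1e9:.1f}B")
```

<p>Under those assumptions even the cheapest (34%) attack costs tens of billions of dollars, before accounting for the market impact of accumulating that much ether.</p><p>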
The amount of ether staked makes the pool of potential attackers small - the pool is probably mostly populated by nation states and crypto deca-billionaires.</p><p>For a proof-of-stake or cost-of-forgery system to be anything other than a plutocracy there must be additional mechanisms in place other than raising the cost of attack. An attack has to be irrational, even for rich adversaries. One way an attack can be irrational is to ensure the cost of attack is greater than the potential return, so that an attacker can only become poorer even if their attack is successful. Ethereum&apos;s proof-of-stake includes penalties for lazy and dishonest behaviour. In the more severe cases individuals lose their staked coins and are also ejected from the network. When more validators collude, the punishments scale quadratically.</p><p>There are also scenarios where rich adversaries might attack irrationally, i.e. despite knowing that they will be economically punished - either because they are motivated by chaos more than by enrichment, or because the factors that make their behaviour rational are non-monetary or somehow hidden (political, secret short positions, competitive edge, etc). These scenarios can overcome any defenses built into the protocol because it only really makes sense to define unambiguous coded rules for rational adversaries.</p><p>The two primary lines of defense in Gitcoin grants are retrospective squelching and Gitcoin Passport. Users prove themselves beyond reasonable doubt to be a real human using a set of credentials a community agrees are trustworthy. They are then more likely to survive the squelching because they behave more like humans than Sybils. The problem, however skillful the modelling becomes, is that being provably human does not equate to being trustable, nor is a community of real humans immune from plutocratic control - rich adversaries could bribe or use their capital to coerce verifiable human individuals to act in a certain way. 
An example of this is airdrop farming - a suitably wealthy attacker could promise to retrospectively reward users who vote in favour of their Gitcoin grant in order to falsely inflate the active user-base and popularity of the grant in the eyes of the matching pool. A simpler example is a wealthy adversary simply paying users directly to verify their credentials and then vote in a particular way.</p><p>It is impossible to define every plausible attack vector into a coded set of rules that can be implemented as a protocol, not least because what the community considers to be an attack might be somewhat vague and will probably change over time (see debates on &quot;exploits&quot; vs &quot;hacks&quot; in DeFi - when does capitalizing on a quirk become a hack, where should the line be between unethical and illegal?). This, along with the potential for attackers to outpace Sybil defenders and overcome protocol defenses, necessitates the protocol being wrapped in a protective social layer.</p><h2 id="h-social-defenses" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Social defenses</h2><p>There has to be some kind of more ambiguous, catch-all defense that can rescue the system when an edge-case-adversary fools or overwhelms the protocol&apos;s built-in defenses. For Ethereum, this function is fulfilled by the social layer - a coordinated response from the community to recognize a minority honest fork of the blockchain as canonical. This means the community can rescue an honest network from an attacker that is rich enough to buy out the majority stake or finds a clever way to defraud the protocol.</p><p>For a Sybil resistance system, it would probably have to be a social-layer backstop too, because only humans have the subjective decision-making powers to deal with the kind of &quot;unknown unknown&quot; attacks that can&apos;t be anticipated. By definition, protocol defenses close known attack vectors, not the hidden zero-day bugs that spring up later. 
For a Sybil-defense system, the backstop would be manual squelching of users or projects that have acted dishonestly in ways that have not been detected by the protocol but are in some way offensive to the community as a whole.</p><p>The danger here is that even with a perfectly decentralized and unambiguous protocol, power can centralize in the social layer creating opportunities for corruption. For example, if only a few individuals are able to overrule the protocol and squelch certain users while promoting others, those individuals naturally become targets of bribes or intimidation or they themselves could manipulate the round to their advantage.</p><p>Therefore, there needs to be some way to impose checks and balances that keep the social coordination fair. There is a delicate balance to strike between sufficiently decentralizing the social layer and exposing it to its own Sybil attacks where the logic could become an infinite loop - to protect against attacks that circumvent the protocol defenses we need to fall back to social coordination which itself needs protecting from Sybils using protocol rules that Sybils can circumvent, meaning we fall back to social action which itself needs protecting... ad infinitum.</p><p>It is still not completely clear how a social-rescue would take place on Ethereum, although there have been calls to define the process more clearly and even undertake &quot;fire-drill&quot; practices so that a rapid and decisive action can be taken when needed. Anti-Sybil systems and grant review systems such as Gitcoin&apos;s could explore something similar.</p><h2 id="h-anti-sybil-onions" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Anti-Sybil onions</h2><p>The pragmatic approach to Sybil defense is to create an efficient protocol that can deal with the majority of cases quickly and cheaply, then wrap that protocol in an outer layer of social coordination. 
This social layer should be flexible enough to quickly and skillfully handle unexpected scenarios. However, to keep the social coordination layer honest, it needs to be wrapped in its own loose protocol.</p><p>From the inner core to the outer layers, the protocols become more loosely defined and subjective; closer to the core, the protocol should be sufficiently precise as to be defined in computer code. There will be some optimum number of layers that will emerge organically to produce a system that is sufficiently robust to all kinds of adversarial behaviour.</p><p>To make this less abstract, the core in-protocol defenses could include automated eligibility checks, retroactive squelching of users identified as Sybils by data modelling, and proactive proof-of-humanity checks against carefully tuned thresholds. This alone creates a community of reviewers, owners and donors that are quite trustable. The social wrapper should be a trusted community that can handle war-rooms and rapid-response decision making for scenarios that are not well handled by the core protocol. One way to do this while protecting against centralized control is to use delegated stake so that the community &quot;votes in&quot; stewards to an emergency response squad they trust to act on their behalf. This will be self-correcting because the community will add and remove stake from individual stewards based on their behaviour. These stewards need a standard operating procedure so that they can spring into action immediately when an attack is detected, which can be crowdsourced - effectively adding a second protocol and social layer to the anti-Sybil onion.</p><p>The benefit of this onion approach is that it allows the great majority of attacks to be neutralized efficiently by the in-protocol defenses, but allows for subjective responses to edge-case attacks. 
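</p><p>The delegated-stake steward election described above can be sketched in a few lines (the data model is invented for illustration; this is not an implemented Gitcoin mechanism):</p>

```python
# Illustrative sketch of delegated-stake steward selection: members
# delegate stake to stewards they trust, and the top-N stewards by
# total delegated stake form the emergency response squad.
from collections import defaultdict

def elect_stewards(delegations, squad_size=3):
    """delegations: iterable of (delegator, steward, stake) tuples."""
    totals = defaultdict(float)
    for _delegator, steward, stake in delegations:
        totals[steward] += stake
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [steward for steward, _ in ranked[:squad_size]]
```

<p>Because delegations can be added or withdrawn at any time, re-running the election keeps the squad self-correcting as the community re-evaluates individual stewards.</p><p>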
It is impossible to defend against the entire attack space, but this approach offers a community-approved route to pragmatic out-of-band decision making when in-protocol defenses are breached or some edge case arises.</p><h2 id="h-outlook" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Outlook</h2><p>Tackling Sybil defense across Gitcoin grants using a monolithic &quot;global&quot; system will necessarily bump up against these issues. One option to overcome this is to break Sybil defense down into composable units that can be tuned and deployed by individual sub-communities instead of trying to construct a Sybil panopticon that works well for everyone. It will be much easier to construct layered anti-Sybil onion systems for individual subcommunities than trying to tune a single monolithic system to work for everyone. This is the approach Gitcoin intends to take in Grants 2.0. A well-defined, easily accessible set of tools and APIs that can be used to invoke Sybil defense within the context of a single project not only allows the Sybil-defense tuning to be optimized for a specific set of users but also empowers the community to control their own security. The challenges then become how to share tools, knowledge and experience across users so they don&apos;t continually re-invent the wheel. We discussed this in some detail in our Data Empowerment post. Decentralizing Sybil defense via a composable set of tools is also an opportunity to crowd-source a stronger defensive layer via community knowledge sharing and parallel experimentation.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Scaling Gitcoin grant reviews]]></title>
            <link>https://paragraph.com/@jmcook/scaling-gitcoin-grant-reviews</link>
            <guid>1m5Okd4AxbNWhcva6ytR</guid>
            <pubDate>Thu, 08 Sep 2022 10:51:25 GMT</pubDate>
            <description><![CDATA[Right now grants are reviewed by a small set of highly trusted individuals within Gitcoin who have built knowledge and mutual trust through experience and discussion. This optimizes for accuracy, but at the cost of centralization, high cost and low resilience to reviewers getting hit by buses; moreover, the low number of individuals meeting some trust threshold is a blocker on this model scaling to large numbers of grants. The challenge is to build a protocol that allows grant reviewing at sc...]]></description>
            <content:encoded><![CDATA[<p>Right now grants are reviewed by a small set of highly trusted individuals within Gitcoin who have built knowledge and mutual trust through experience and discussion. This optimizes for accuracy, but at the cost of centralization, high cost and low resilience to reviewers getting hit by buses; moreover, the low number of individuals meeting some trust threshold is a blocker on this model scaling to large numbers of grants. The challenge is to build a protocol that allows grant reviewing at scale to be very fast, very cheap and very accurate. Achieving any two of these is easy (to be cheap and fast, automate completely; to be accurate and fast, pay high fees to trusted reviewers; to be cheap and accurate, allow well-paid trusted reviewers to work slowly) but optimizing across all three requires more sophisticated protocol engineering.</p><p>In the broadest terms, there are two levers available for reducing the cost, increasing the speed and maintaining the accuracy of grant reviews. These are a) decentralization and b) automation. Decentralization grows the pool of reviewers to keep pace with the growing pool of grants, moving responsibility from a small pool of trusted reviewers to a larger community - i.e. it makes more humans available to do reviewing work. Automation takes mechanistic tasks out of the hands of humans and does them quickly with a computer instead. Both of these approaches offer important improvements but come with limitations that must be managed. For example, decentralizing naively by simply opening up the reviewer pool risks reducing the overall trustability of the pool because adversarial reviewers will participate to skew the grant reviews in their favour or low-competence reviewers will degrade the review quality. 
Similarly, automation can deliver efficiency gains to the reviewers but too much automation down-regulates nuanced, subjective grant decisions that can only really be made by skilled human reviewers.</p><p>A perfect grant reviewing protocol finds the optimal balance between these factors and enables grants to be reviewed with the optimal balance of speed, accuracy and cost. The system also needs to be configurable and manageable for individual grant owners with minimal reliance upon any central provider. There has been substantial R&amp;D within Gitcoin into how such a system could be built. This post will provide an overview of that R&amp;D and the current roadmap.</p><h2 id="h-what-does-a-good-system-look-like" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What does a good system look like?</h2><p>A good system for reviewing Gitcoin grants must:</p><ul><li><p>Benefit from economies of scale - the cost must not scale proportionally to the number of grants</p></li><li><p>Maintain review quality even when the number of grants increases</p></li><li><p>Enable all grants in a round to be reviewed rapidly - the time taken to complete a round must not scale proportionally to the number of grants</p></li><li><p>Have a UX that encourages project leads to take ownership of their own review system configuration and management.</p></li><li><p>Avoid creating game-able incentives and Sybil vulnerabilities.</p></li></ul><p>If there is one concept that has outsized influence on our ability to build such a system, it is <strong>composability</strong>. For Gitcoin grant reviews this means creating a set of tools that are available to be switched on/off and tuned specifically to the needs of individual grants. A composable set of grant review and Sybil defense tools would enable grant/round owners to configure the reviewing environment to their specific needs and the priorities of their users, while encouraging them to take ownership of their own review system. 
Allowing grant/round owners with domain expertise to make their own decisions about the context of their own grant round decentralizes the rule-making and distributes it across the actual user-community rather than having it centralized under Gitcoin control. It also likely promotes effective scaling to large numbers of grants because reviewing is handled by sub-communities, effectively parallelizing the work. In the composable paradigm, larger numbers of grants could be handled by more subcommunities, equivalent to a computer that dynamically adds more processing cores as the size of some parallelizable task increases.</p><h2 id="h-how-can-we-build-this-sytem" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">How can we build this system?</h2><p>Outsourcing work to computers reduces the cost of reviews. This has to be balanced against the nuanced, subjective decision-making that can only be made by humans, so the challenge is to incorporate the right amount of automation that doesn&apos;t compromise review quality. The question then becomes: which tasks can safely be automated?</p><h3 id="h-objective-reviews" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Objective reviews</h3><p>One task that can be automated is a simple check that grants meet the basic eligibility requirements. This can be achieved using a simple, standardized questionnaire given to grant owners at the time of submission, where the answers can be evaluated against a set of conditions, for example:</p><pre data-type="codeBlock" text="The grant is VC funded: yes/no
The grant has a token: yes/no
Is the owner account at least 100 days old?: yes/no
...
"><code>The grant <span class="hljs-keyword">is</span> VC funded: yes<span class="hljs-operator">/</span>no
The grant has a token: yes<span class="hljs-operator">/</span>no
Is the owner account at least <span class="hljs-number">100</span> <span class="hljs-literal">days</span> old?: yes<span class="hljs-operator">/</span>no
...
</code></pre><p>Some of these answers can be verified easily using Gitcoin data. The problem is how to determine whether the grant owner has lied about less easily-verifiable questions. One option is to manually review the answers, but this reinstates the human work that the automated eligibility checks were supposed to remove. Another option is to have a whistleblower reward so that grant owners within a round are incentivized to call out peers who have dishonestly characterized their projects, with the equivalent of &quot;slashing&quot; for dishonesty being removal from the round or capping their matching pool rewards. It must be irrational for a grant owner to lie about their project&apos;s properties. An initial sift of grants that eliminates those that do not meet basic eligibility would cut the number of grants that make it to the review stage by at least 50% (based on data from previous rounds), translating into large cost savings. This eligibility check can be thought of as an initial &quot;objective review&quot; - it asks whether a grant meets a set of defined criteria and ejects those that do not from the round before human reviewers have been involved. Additional protection can come from incentivized whistleblowing. After the objective review the grant will undergo a subjective human review. This requires human reviewers to be assigned to the grant.</p><h3 id="h-subjective-reviews" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Subjective reviews</h3><h4 id="h-reviewer-selection" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Reviewer selection</h4><p>It is also possible to automate the selection of reviewers that are assigned to a particular grant. Review software like Ethelo can pick reviewers from a central pool and assign them to grants that match their preferences, expertise etc, relieving this responsibility from round owners. 
This seems straightforward, but there must also be a trust layer built into this process to prevent the reviewer pool being populated by low-competence or adversarial reviewers. This can also be automated to a large extent using Gitcoin Passport. Gitcoin Passport is a collection of &quot;stamps&quot; that act as proof of personhood and evidence of expertise or experience in a certain domain. An automated scan of Gitcoin Passport stamps can be used to create a reviewer profile that can then be used to eject suspicious reviewers from the pool or reduce their trust score. Conversely, metrics derived from Gitcoin Passport stamps can be used to verify reviewers as non-Sybil and give them scores that demonstrate their suitability to review certain grants. Gitcoin Passport can be integrated into Ethelo in this way to automate some important and laborious parts of the reviewer selection process, reducing the cost per grant. Not only will the accuracy not diminish, it may well even improve because of the substantial amount of hidden data science underpinning Gitcoin Passport&apos;s stamps and trust weighting functions.</p><p>This automation can also provide composability and configurability to grant owners by allowing them to tune the Gitcoin Passport stamps that are of interest to them and the criteria they think could be used to identify good grant reviewers. The UX for this could be very simple - a one-click option to activate Gitcoin Passport and then some simple input fields to define some terms of interest. Behind the scenes, these tags can then link to specific stamps. For example:</p><pre data-type="codeBlock" text="GRANT OWNER: Reviewer configuration:

- Sybil threshold: 0-85 (how sure do you want to be that your reviewers are honest humans?)
- Web3 score: 0-85 (how much web3 experience should your reviewers have?)
- Domain expertise: [ text  ]  (what keywords should we look for across the reviewer pool)
- How many reviewers should see each grant?: 2-4 (cost increases per additional reviewer. Default==3)
- etc...
"><code>GRANT OWNER: Reviewer configuration:

<span class="hljs-operator">-</span> Sybil threshold: <span class="hljs-number">0</span><span class="hljs-number">-85</span> (how sure do you want to be that your reviewers are honest humans?)
<span class="hljs-operator">-</span> Web3 score: <span class="hljs-number">0</span><span class="hljs-number">-85</span> (how much web3 experience should your reviewers have?)
<span class="hljs-operator">-</span> Domain expertise: [ text  ]  (what keywords should we look <span class="hljs-keyword">for</span> across the reviewer pool)
<span class="hljs-operator">-</span> How many reviewers should see each grant?: <span class="hljs-number">2</span><span class="hljs-number">-4</span> (cost increases per additional reviewer. Default=<span class="hljs-operator">=</span><span class="hljs-number">3</span>)
<span class="hljs-operator">-</span> etc...
</code></pre><p>There could therefore be a continuum of automation from entirely manual reviewer selection to heavily automated. Each round owner could configure their round with some lower limit on the amount of automation (e.g. &quot;grants in this round must turn Ethelo ON&quot;). Within the bounds set by the round owner, the grant owners could then tune their reviewer settings to their own specific grant. This approach removes a lot of the friction, and cost, associated with reviewer assignment and trust assessment.</p><h4 id="h-reviewing" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Reviewing</h4><p>The previous section showed how the reviewer assignment could be improved using Gitcoin Passport and Ethelo. There are also enhancements that can be made to the reviewing itself. Currently, the actual reviews are a particular bottleneck because they are conducted by a small number of known, trusted individuals with experience reviewing across multiple rounds. However, to scale to more grants, and to demonstrate credible neutrality, we need more reviewers. This raises two important questions:</p><ol><li><p>how can we incentivize users to become reviewers?</p></li><li><p>how can we ensure the reviewers in the pool are trustworthy and competent?</p></li></ol><p>Reviewer incentives can be monetary or non-monetary. In GR14 FDD created a model that explored how reviewers could be optimally organized and remunerated for their work. One outcome from this was a scheme that randomly assigned three eligible reviewers with different trust levels to each grant, always ensuring that the total trust level exceeds some threshold. 
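</p><p>That assignment scheme can be sketched as follows (the pool, trust scores and threshold are invented for illustration, not the actual FDD model):</p>

```python
# Illustrative sketch of the GR14 idea: randomly draw triples of
# reviewers until their combined trust level clears a threshold.
import random

def assign_reviewers(pool, trust, threshold, k=3, max_tries=1000):
    """pool: reviewer ids; trust: id -> trust score."""
    for _ in range(max_tries):
        picked = random.sample(pool, k)
        if sum(trust[r] for r in picked) >= threshold:
            return picked
    raise RuntimeError("no reviewer combination met the trust threshold")
```

<p>Rejection sampling keeps the selection random - so reviewers cannot predict which grants they will see - while still guaranteeing each grant a minimum total trust level.</p><p>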
The reviewers can then be paid in $GTC or another community token with the value awarded scaling with their trust level, incentivizing reviewers to participate to get the experience and Gitcoin Passport stamps that raise their trust scores to unlock greater rewards.</p><p>Non-monetary incentives have also been shown to be powerful in many communities, especially achievement-gated POAPs and NFTs which could themselves factor into the Gitcoin Passport trust scores or be used directly as Passport stamps. An empirical experiment is currently underway that will test the performance of reviewers incentivized in different ways in an Ethelo mini-round and also explore what data is most valuable to collect from reviewers and how that data can best be managed and mined.</p><p>Gitcoin Passport provides the initial trust score that can be used to give confidence that the reviewers in the pool are non-Sybil and also to partition the reviewer pool into trust levels and knowledge domains. This information can be used to assign reviewers to particular grants according to the preferences of individual grant owners. At the same time, participation increases the trust score of reviewers, especially when they are rewarded with POAPs and NFTs that can be factored into their Gitcoin Passport. This flywheel effect could rapidly grow a community of trusted reviewers for individual communities, for Gitcoin and across Web3 as a whole because the reviewer&apos;s credentials would travel with the individual as a composable, transferable set of attributes. Since getting started as a reviewer is permissionless, anyone can jump onto this flywheel and become a trusted individual - the only criterion is active, honest participation. The transferable social reputation and activity-unlocks offered by the POAPs and NFTs might turn out to be more valuable to users than the monetary rewards, which could lead to further cost efficiencies. 
The positive feedback loops and community-building that can emerge from non-monetary rewards are also desirable because they circumvent to some extent the risk of inadvertently embedding a plutocracy where the barrier to Sybil attacks and dishonest review practices is just some threshold of wealth. It also encourages individuals to take the necessary actions to increase their skill and trustability over time while also attracting new entrants to the reviewer pool, creating a flywheel of positive behaviours that can diffuse across the web3 space.</p><p>The idea of credential transferability is very interesting and draws upon some of the core ethos of web3 - one could imagine interesting symbioses where platforms that benefit from Gitcoin funding encourage their users to participate in reviews by giving privileges to their own users if they have a certain trust level or particular set of Gitcoin Passport stamps, and reciprocally provably honest usage of the platform is recognized by Gitcoin Passport in the form of stamps or trust score.</p><h3 id="h-appeals-approvals-and-qc" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Appeals, approvals and QC</h3><p>The proposed grants system is ultimately a consensus protocol - it uses a set of embedded rules to create a situation where the community can agree that grants have been honestly reviewed by human reviewers to an acceptable degree of confidence. However, the protocol requires a social coordination layer underpinning it that can act as a fallback when a) the protocol misjudges a review and b) an attack successfully breaches the protocol&apos;s in-built defenses. The simplest model for this is to have some trusted arbiter that polices the grant review process and settles any disputes. However, this is a centralized model, which raises risks of that arbiter being hit by a bus, being evil, or being bribed. 
Much better is a decentralized mechanism through which a community can collectively settle disputes and whistleblow on dishonest behaviour. One idea is an iterative mechanism where disputed grants are reviewed by another random selection of reviewers, perhaps with a higher overall trust score than in the previous round, this time with access to the dispute details submitted by the grant owner. The cost of this could be covered by the grant owner if the dispute fails and by Gitcoin if the dispute succeeds, creating an economic disincentive for dishonest grants and an incentive for honest grants to appeal an unfair result. Another idea is settling appeals on an open forum, with a Gitcoin Passport trust-score weighted vote determining the outcome after a discussion period. These mechanisms need not apply only to appeals from grant owners: they could also handle reports of dishonest grants or reviewers raised by other members of the community (whistleblowers).</p><h3 id="h-retrospective-insights-and-round-on-round-learning" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Retrospective insights and round-on-round learning</h3><p>Incorporating more software, such as Ethelo, into the grant reviewing process also enables more data capture, supporting round-on-round learning and refinement of the system. A challenge for Gitcoin right now is making its data openly available and accessible while keeping it sufficiently anonymized. In a previous post, we outlined plans for decentralizing Gitcoin data and empowering the community to access it and use it in creative ways. This is likely to become increasingly important as Gitcoin grants scales to more users, as we cannot assume insights from smaller rounds under the original centralized grants system hold true for larger grant rounds managed under Grants 2.0. 
It makes good sense that as the number of grants, grant owners and reviewers and the size of the data captured all grow, so should the data science community tasked with mining knowledge and refining the system.</p><p>A few examples of how this data analysis could feed back into the grants system are a) identification of metrics and indicators that could act as early warning systems for &quot;bad&quot; grants or reviewers; and b) indicators that can aid in Sybil detection and refine Gitcoin Passport trust scores.</p><h3 id="h-some-composability-risks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Some composability risks</h3><p>One could imagine a system where the owner of each submitted grant has the freedom to easily configure and manage how their project is reviewed. However, offering complete freedom to customize opens the door to nefarious actions by grant owners. For example, a grant owner could remove any quality control and encourage Sybil-type donations to their grant. There is also cartel risk, where round owners tune their eligibility requirements to be extremely high or very specific in order to favour certain individuals, creating exclusivity that can then be gamed.</p><p>The ideal situation would be an incentive structure that encourages grant owners to tune their review configuration to be more stringent rather than less. One could imagine incentivized inter-community moderation services that cross-calibrate review quality. Pragmatically, at least at the start, protection from composability-gaming would more likely come in the form of simple binary eligibility criteria that enable a round owner to programmatically filter out ineligible grants. 
This would free up human reviewers for assessing valid grants that need subjective, nuanced human consideration.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>Gitcoin is growing and it needs a grant reviewing system that can scale. At the moment, grant reviews are too centralized and rely too heavily on the work of a few trusted individuals. To scale, we must increase the number of individuals participating in grant reviews while also outsourcing as much of each individual&apos;s work to computers as possible. At the same time, we need the system to be adaptable to the needs of individual groups of users so that grant reviewing can be community-tailored at the most granular level possible. The vision is that a grant owner can submit their grant to a round that has been configured to meet their specific needs, and that they have agency in determining precisely what that configuration looks like. Their grants are then reviewed by a set of reviewers from the community pool with the requisite knowledge and experience. Grant owners can then be satisfied that their round was safe, fair, aligned with their community&apos;s values and completed quickly. Delivering this vision requires a composable stack of accessible reviewing tools and systems of incentives that encourage those tools to be adopted and used honestly. This article has outlined what some of those tools might be, how they might be deployed in Grants 2.0, and what attack vectors might need to be managed when composability is introduced into the reviewing process.</p><p>Moving forwards from here requires weighing up the options for composable reviewing tools and making decisions about what to implement in the coming rounds and how to monitor their performance. There are experiments currently underway with test populations of reviewers that should inform these discussions.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Decentralizing Sybil defense using Gitcoin data]]></title>
            <link>https://paragraph.com/@jmcook/decentralizing-sybil-defense-using-gitcoin-data</link>
            <guid>7HR3Z5jzy2S7052bfxsG</guid>
            <pubDate>Wed, 07 Sep 2022 15:41:35 GMT</pubDate>
            <description><![CDATA[It seems like almost every non DeFi project, from DAOs to gaming and from airdrops to the support for public goods provided by Gitcoin and others, rely on some notion of identity in their decision making. And yet - all too often such systems are attacked by actors seeking to amplify their votes, game the games, farm the airdrops, and even divert public goods funding to their own projects. Such attacks that rely on the impersonation of thousands of seemingly independent identities that are act...]]></description>
            <content:encoded><![CDATA[<p>It seems like almost every non-DeFi project, from DAOs to gaming and from airdrops to the support for public goods provided by Gitcoin and others, relies on some notion of identity in its decision making. And yet - all too often such systems are attacked by actors seeking to amplify their votes, game the games, farm the airdrops, and even divert public goods funding to their own projects. Attacks that rely on the impersonation of thousands of seemingly independent identities that are actually orchestrated as one are called Sybil attacks, and they are a growing threat across many sectors of web3. This is because fungible tokens can easily be divided across multiple wallets - a strategy that can enable one individual to divide into multiple virtual personas that each have some voting power. If the cost to create a new persona is lower than the reward that new persona can generate, users are incentivized to divide themselves. The ability to mount Sybil attacks undermines the one-person one-vote model that many projects would ideally implement.</p><p>Because Sybil attacks are a big problem across Web3, many projects have independently developed their own systems of Sybil defense. This has led to a useful diversity of methods but few composable, transferable tools, causing lots of developers to spend a lot of time and money reinventing the wheel or missing opportunities to develop synergistically across projects. Composability and openness are core principles of web3, but Sybil defense has so far remained strangely siloed. Web3 natives have the decentralized infrastructure, tooling and open-source ethos required to solve this problem. 
An explicit intention to tap into the power of web3-native communities to build open, composable tools together could lead to a much healthier, more Sybil-resistant industry.</p><p><strong><em>Building together, in the open, to defend the space against bad actors is the natural state for web3 natives!</em></strong></p><h2 id="h-gitcoins-sybil-problem" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Gitcoin&apos;s Sybil problem</h2><p>Gitcoin is a platform for public goods funding that currently uses a quadratic voting mechanism to boost donations to individual projects with funds from a matching pool. The quadratic voting mechanism aims to direct more funds from the matching pool to more popular projects, where popularity is measured as the number of individuals voting for a project rather than simply the number of tokens voting. Sybil attackers can therefore divide their donation across multiple addresses to boost the amount of funds Gitcoin allocates from the matching pool. The effect is to skew Gitcoin&apos;s view of the community preferences for each project and cause misallocation of funds to projects that aren&apos;t as popular as they appear.</p><p>In the most recent grants round, up to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/gr14-governance-brief/11050">26% (&gt; 16k accounts)</a> of the total population of users was estimated to be Sybil - substantially greater than in previous rounds. There was also a notable increase in airdrop farming, where individuals are encouraged to create wallets and donate to projects in the expectation of a return in the form of a token airdrop. 
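</p><p>To see why splitting pays, consider the standard quadratic funding formula, in which a project&apos;s matching weight is proportional to the square of the sum of the square roots of its contributions. A minimal sketch:</p>

```python
import math

def qf_weight(contributions):
    """Raw quadratic-funding weight for one project:
    (sum of square roots of contributions) squared."""
    return sum(math.sqrt(c) for c in contributions) ** 2

honest = qf_weight([100.0])      # one real donor giving $100
sybil = qf_weight([1.0] * 100)   # the same $100 split across 100 fake accounts

print(honest, sybil)  # 100.0 10000.0 -- splitting inflates the weight 100x
```

<p>The same total donation carries one hundred times the matching weight when split across Sybil accounts, which is exactly the incentive the defenses discussed here aim to remove.</p><p>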
Regardless of whether the Sybils target the matching pool or future airdrops, they aim to maximize their return on investment by tricking the grants round into over-weighting a particular project.</p><p>Defending against Sybil attacks helps to realign capital allocation with the distribution of real human votes. For Gitcoin, the Sybil defense problem reduces to:</p><p><strong><em>”Detect users that are gaming the system of quadratic funding and minimize their impact on capital allocation.”</em></strong></p><h2 id="h-the-current-sybil-defenses" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The current Sybil defenses</h2><p>Gitcoin currently uses two <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/closing-the-gap-between-fdd-and-gitcoin-passport-sybil-defenses/11218">Sybil defense strategies</a> that have been developed independently, in parallel. One is a retrospective &quot;squelching&quot; of accounts that a human-in-the-loop machine learning algorithm identifies as Sybil. The other is a proactive system that gathers evidence of humanity through passport stamps and uses this evidence to weight an individual&apos;s influence on the capital allocation.</p><p>The passport project is open source and is being developed by Gitcoin into a protocol that any project can adopt. In addition to passport, we are also interested in sharing with the broader community our machine learning, data grooming, data sets and other approaches to finding and countering Sybil attacks.</p><h2 id="h-what-do-we-need-to-build" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">What do we need to build?</h2><p>Gitcoin passport and retroactive Sybil defenses need to be made interoperable, synergistic and applicable to small independent projects. 
As Gitcoin implements Grants 2.0 (which aims to give individuals ownership over the management of their own projects) it is critical that Sybil defense evolves into a toolset that can reasonably be deployed and configured frictionlessly for individual projects.</p><p>The two established approaches to Sybil defense, retrospective squelching and proactive scoring, resolve to two basic ideas: finding signals for bad actors (wallets with similar handle names, donation patterns, etc) and finding signals for good actors (passport stamps, high cost of forgery, personhood score etc). These concepts need to be the foundations upon which composable anti-Sybil tools are built. Can we develop light-weight tools that integrate these lines of evidence into Sybil detection in a way that is configurable to individual users and scales across many projects inside and outside of Gitcoin? What steps can we take now that can get us there as soon as possible?</p><h2 id="h-capitalizing-on-existing-data" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Capitalizing on existing data</h2><p>One of the primary features of blockchains is the immutable and open ledger of current and historical data. For Ethereum, every full-node stores the global state dating back at least 128 blocks, and older blocks are available by querying archive nodes. The majority execution client, Geth, currently requires more than 650GB of free disk space and grows by about 14GB/week for a default cache configuration. An archive node requires multiple TB of disk space and cannot be pruned. This is a blocker for most downstream users of the data, especially because searching through on-chain data directly can be slow, computationally expensive and unintuitive.</p><p>In addition to on-chain data, rich information about individual grants and users can be found in Gitcoin&apos;s own offchain data resources. 
This includes metadata about individual projects, inclusion criteria and conditions for specific actions, information about projects, Gitcoin passport scores, Sybil defense scores, etc.</p><p>Clearly, there is a huge vault of data, some of which resides on-chain and some off-chain, which can be mined in the service of Sybil defense. However, the bottleneck for now is that this data needs a substantial amount of cleaning, organizing and curating to make it accessible to the data scientists who can tease useful Sybil-signals out of it. This is an important pain point that is well-known in corporate data engineering. The data is also currently predominantly centralized under Gitcoin control, which is not only counter to Gitcoin&apos;s core ethos but also limits the community participation in data mining.</p><p>Dedicated efforts to build the infrastructure for data and intelligence sharing could go a long way to solving these pain points and maximizing the value users can extract from the available data. In the following sections we examine the infrastructure requirements for extracting the maximum value from this data resource and how it can be put to use for developing composable Sybil defenses.</p><h2 id="h-infrastructure" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Infrastructure</h2><h3 id="h-storage" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Storage</h3><p>The base infrastructure for a system that makes data more easily available to the community is <em>storage</em>. The easy but expensive and centralized option for this is to use cloud providers. A cheaper solution is to establish and maintain hardware storage internally to Gitcoin and make the data available to the community, but this just moves the centralized power to Gitcoin. The decentralized option is to make use of decentralized data platforms such as IPFS, Arweave, Filecoin, BitTorrent etc. 
These platforms generally require a user to &quot;rent out&quot; a portion of their local storage, sometimes in return for some form of tokenized reward.</p><p>It is not just a case of storing raw data, though. What is more empowering is distributing cleaned data that can be readily analysed or potentially used directly as features in machine learning models, taking cues from platforms such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.featurestore.org/">Feature Store</a>. This kind of infrastructure would make open data much easier to share, especially combined with composable compute options.</p><h3 id="h-compute" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Compute</h3><p>Making data available is great, but the community could really level up if the tools for extracting value from that data were composable too. For on-chain data the foundational tools for data exploration are in-client tracers and network analysis tools, but these require specialist technical know-how to use effectively and can be computationally expensive.</p><p>Thankfully, there are some third-party services such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://thegraph.com/en/">TheGraph</a> that wrangle blockchain data into SQL-like databases that can be interrogated more easily and cheaply than the blockchain itself. This data can include interactions between individual accounts and contracts (e.g. voting in DAOs, delegations, staking) and ownership of digital assets (e.g. POAPs, NFTs, ENS, governance tokens) that can be used to build topologies of Web3 interactions and web3 &quot;fingerprints&quot; for certain (groups of) users that may reveal diagnostic signals for Sybil/non-Sybil users. 
This is a rich data resource that could be served by incentivizing a community of sub-graph builders who produce quick-access databases for useful data from the blockchain and make them available downstream. Dune Analytics is another provider of on-chain data - it is used to generate user-friendly dashboards for blockchain metrics that can include complex queries. These dashboards and the underlying data can then be shared easily between data analysts or delivered straight to end-users. The launch of Grants 2.0 will include the publication of a subgraph or collection of subgraphs that will expose the important on-chain data - an important step towards community-powered Sybil defense.</p><p>Jupyter Notebooks are an excellent option for serving user-facing applications that exploit offchain data. These are shareable, executable notebooks containing (usually) Python code in a run-time environment in the browser - a very accessible way to share model code and analysis. Notebooks can be written and shared by a community of analysts, allowing easy programmatic access to Gitcoin data. Notebooks could be modified and used as code-legos between different groups. Jupyter Notebooks are also well-suited to running in the cloud, making access to large computing resources relatively straightforward. This is important for machine learning and AI pipelines that require batch processing and large computational resources - such as those used by the SAD model.</p><p>To maximize the power of the stack, the data storage infrastructure should expose an API that can easily be called in Jupyter Notebooks or similarly shareable code or web-applications with decentralized front-ends. This could then lead to a composable data-science stack that includes onchain and offchain data and builds a community of analysts and developers that can continually improve upon and adapt each other&apos;s work. 
Extending this idea, one could imagine <code>pip install sybil-defense</code> in the future.</p><p>Ultimately, this work needs to feed into Gitcoin Passport so that there is a clear pipeline from community data science activity to Sybil defense. There are many useful Sybil-signals ossified in on-chain data, but it is not immediately clear how these translate into unambiguous flags that can be used in Sybil defense. There needs to be some middleware that translates insights from community-led data analysis into Gitcoin Passport stamps. This middleware needs to evaluate the evidence from community data analysis that some signal is diagnostic of Sybil behaviour and turn that into a stamp or an influence on a user&apos;s Trust Score.</p><h2 id="h-incentives" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Incentives</h2><p>A particular challenge is bootstrapping an active community. Strategies to incentivize users include branding, marketing and monetary/non-monetary rewards for participation. Users should also be incentivized by the potential outcomes, which will be valuable to project owners - specifically the improved user experience of a composable set of anti-Sybil tools and the flexibility to apply them bespoke to their own project. Rewards could come in the form of a custom token on Ethereum, or they could be bootstrapped using existing tokens such as $GTC. Non-monetary rewards could include POAPs and NFTs that could then become Gitcoin Passport stamps or features in a user&apos;s Gitcoin Passport trust score. There could also be other incentivization schemes such as data science competitions, bounties and dedicated grant rounds, leaderboards and other forms of gamification.</p><h2 id="h-outcomes" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Outcomes</h2><p>The ultimate goal of this focus on infrastructure is to build the most efficient and accessible rails for giving away our data for free. 
We can then encourage the community to jam on novel ways to use it in service of Sybil defense. This probably includes generating models and insights from the data and returning them to the community as building blocks for more innovation, and especially translating data insights into Gitcoin Passport stamps.</p><p>The potential concrete outcomes from this are a more composable and more performant set of Sybil-defense tools. In the context of Grants 2.0, where project leads are responsible for managing their own grant round using a set of software tools, they could also take on responsibility for their own Sybil defenses. The data, and the tools for analysing and making decisions from that data, would be <em>available</em> and <em>accessible</em> such that they could be tuned to the needs of an individual project, rather than having a monolithic Sybil defense strategy as in the current grants system. Overall, the outcome is to empower the community to establish their own bespoke defenses.</p><h2 id="h-where-to-go-from-here" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Where to go from here</h2><p>Empowering the community with open data could be a substantial unlock for Gitcoin because it broadens participation in Sybil defense to the entire community. Instead of a dedicated team of individuals being responsible for defending the entire space, the entire space becomes responsible for defending themselves and their peers.</p><p>The first step towards this goal is to define a roadmap for building an open data infrastructure that makes Gitcoin data openly available (which will be a natural part of launching Grants 2.0) and, critically, <em>accessible</em> and <em>usable</em> to as many people as possible. 
This means auditing the available data, devising a plan to organize and disseminate it, creating strong documentation, starting communities around data wrangling and analysis, and determining the incentive structure.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Closing the gap between Retroactive Sybil defense and Gitcoin passport]]></title>
            <link>https://paragraph.com/@jmcook/closing-the-gap-between-retroactive-sybil-defense-and-gitcoin-passport</link>
            <guid>v5tO4k5Fy0WknzqXf9lq</guid>
            <pubDate>Wed, 07 Sep 2022 15:28:37 GMT</pubDate>
            <description><![CDATA[Gitcoin aims to optimize capital allocation within a grants round (GR), primarily by preventing capital capture by Sybils. There are currently two independent systems in place for this that run in parallel. First, a multiplier ("Trust Score") is assigned to an individual&apos;s donation depending on their non-Sybil traits - the more likely they are to be a real human the more their donation is multiplied in the matching pool. This Trust Score is derived from evidence of personhood that a user...]]></description>
            <content:encoded><![CDATA[<p>Gitcoin aims to optimize capital allocation within a grants round (GR), primarily by preventing capital capture by Sybils. There are currently two independent systems in place for this that run in parallel. First, a multiplier (&quot;<strong>Trust Score</strong>&quot;) is assigned to an individual&apos;s donation depending on their non-Sybil traits - the more likely they are to be a real human, the more their donation is multiplied in the matching pool. This Trust Score is derived from evidence of personhood that a user collects in their <strong>Gitcoin Passport (GP)</strong>. The other way is <strong>Sybil Account Detection (SAD)</strong>, where accounts that are identified as potential Sybils by a human-in-the-loop machine-learning pipeline are &quot;squelched&quot; - i.e. ejected from the GR. As we move towards Grants 2.0 there is a need to optimize these processes and pivot towards a more composable Sybil defense system that can be tuned by individual grant owners to their own community&apos;s needs.</p><h2 id="h-differentiating-sad-from-gp" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Differentiating SAD from GP</h2><p>The fundamental difference between SAD and GP is that GP is proactive - it provides a continuous metric for a user <em>in advance</em> of them participating in a grant round and uses that information to define their impact. It takes into account the &apos;stamps&apos; a user has in their passport, each of which provides evidence that the user is a real human, and increments their weighting in the matching pool proportionally to the weight of evidence in their passport.</p><p>On the other hand, SAD retrospectively examines an individual&apos;s behaviours, generates a probability that they are a Sybil, then applies a threshold to convert that probability into a binary Sybil/non-Sybil outcome. 
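</p><p>That final step can be sketched in a few lines (the threshold value and account data here are illustrative only; the real pipeline&apos;s cut-off is tuned by human reviewers):</p>

```python
def sad_classify(p_sybil: float, threshold: float = 0.5) -> bool:
    """Collapse the model's continuous Sybil probability into a
    binary squelch decision. The threshold is illustrative only."""
    return p_sybil >= threshold

# hypothetical model outputs for two accounts
scores = {"0xabc...": 0.91, "0xdef...": 0.12}
squelched = [acct for acct, p in scores.items() if sad_classify(p)]
print(squelched)  # ['0xabc...']
```

<p>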
SAD then retroactively removes Sybils from the round.</p><p>SAD and GP have been developed in parallel but mostly independently. Running two independently-developed Sybil defense systems side by side hints at a future where any number of systems can run in parallel, with the overall trust score computed as a dot product of the individual trust vectors. At the same time, there are also opportunities for synergistic relationships between anti-Sybil systems. A good first step in identifying such synergies is to compare the outcomes from SAD and GP in the latest GR to see how closely aligned they are.</p><p>If both the SAD and GP approaches to Sybil defense were perfect, they would silence the same accounts, and those accounts would all be Sybils. In reality there is a <strong>gap separating these two processes</strong> because each one is imperfect in its own unique ways.</p><p>In a well-tuned system, we might expect to see a clear relationship between the percentage of accounts identified as Sybil and the GP Trust Score assigned to those same accounts. The plot below shows the SAD likeliness score plotted against the GP trust score for GR14. There is a weak positive relationship, but with significant deviations from a simple linear relationship. 
These deviations represent differences between the methods, their sources of error, and opportunities for learning.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b2a04e5b8a39bff649b1328fa96a20cf43d43960c82b2651b393fd018bbb65d9.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Cross-calibrating between SAD and GP with the aim of “closing the gap” could improve the performance of both, identify areas of redundancy and potentially nudge us towards unifying them into a cheaper, more efficient optimization layer. The space separating the exclusionary logic of SAD and the inclusionary logic of GP is where the greatest opportunities for learning and optimization lie.</p><h2 id="h-proof-of-personhood" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Proof of personhood</h2><p>GP requires users to gather evidence of personhood in advance of participating in the grants round. This can include proof of engaging with a range of on- and off-chain activities such as Twitter, Facebook, Proof-of-humanity, BrightID and several others. More evidence gives the user a greater score, which in turn amplifies their influence in grant funding.</p><p>However, most Gitcoin users do not attempt to gather proof of personhood in advance, either because they are not sufficiently well incentivized to do so or because they are unaware of the benefits. While it might be expensive for Sybils to gather <em>lots</em> of robust evidence of personhood, it is still relatively cheap to gather <em>something</em>. This means many Sybils actually have higher trust scores than real users! In this case GP will invert the desired outcome of silencing Sybils relative to real users. 
A well-tuned system would make it very expensive for Sybils to gain outsized influence, but cheap for real users to prove their personhood.</p><p>However, it is still likely that those Sybils that sneak through with cheap passports may exhibit some Sybil behaviours that can be detected by a retrospective SAD algorithm, correcting the outcomes by squelching offending users. SAD, however, is very computationally heavy and also requires human evaluators to tune the model regularly. It is in a constant state of flux because of the cat-and-mouse nature of Sybil defense. This is a bit of a barrier to making SAD composable for Grants 2.0. There are a few possible routes to overcoming this. SAD could incorporate some heuristics that give obvious Sybils and non-Sybils an escape hatch to instant classification before they ever reach the SAD algorithm.</p><p>Ideally, heuristics are cheap and easy for real users to conform to, but expensive and awkward for Sybils. This dichotomy can be overcome in many cases by aiming for identifiers real people <em>already have</em> and that are hard to replicate quickly. A great example of this is social media. Gitcoin donors are likely to have one or more of Twitter, Facebook and Google accounts, each of which is likely to have existed for some substantial amount of time and to have a network and a transaction history associated with it. For those users it is easy and cheap to connect their account and prove their personhood, but it is awkward and time-consuming for Sybils to create multiple accounts and generate an activity history for each one.</p><p>Strategic use of heuristics could lighten the computational load on the SAD algorithm and make it faster and cheaper to run. Some of the heuristics could come from GP. 
For example, there are some markers of personhood that are prohibitively expensive for Sybils to gather, such as paid subscriptions, ownership of certain NFTs (especially non-transferable ones) or limited-run POAPs that had strict eligibility requirements (e.g. those given out at a specific event or for taking some specific action). The plot below identifies a cluster of accounts that provably hold membership in some specific DAOs. They have noticeably better performance in the GR than the majority of accounts.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/57251c44be241a9d7a60eb949d28b08bc8a8f514f1e458517abf22b1aa8e82a5.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>A paid subscription to Bankless DAO, or a top-tier LinkedIn profile, for example, costs upwards of $1000 - probably far too expensive to be purchased in bulk for the purpose of a Sybil attack. Non-transferable NFTs have been awarded for certain educational programs such as the Consensys Blockchain Developer Bootcamp - these were only awarded to paid attendees that passed several rounds of examination and are non-transferable, so would be very difficult for a Sybil to obtain, certainly in any meaningful volume. There have been POAPs for running validators on the Ethereum merge testnets - these are only awarded to addresses that provably ran validator nodes for some threshold amount of time before, during and after the testnet merges - it would be awkward and time-consuming for a Sybil attacker to spin up Ethereum validators to farm those POAPs. Another example that is already used by SAD is the edit distance (Levenshtein distance) between account names. 
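</p><p>The edit distance itself is a simple dynamic-programming computation; clustered, near-identical handles are a classic signature of scripted account creation (the handles below are hypothetical):</p>

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# auto-generated handles cluster at tiny distances
print(levenshtein("grants-fan-01", "grants-fan-02"))  # 1
print(levenshtein("grants-fan-01", "vitalik"))        # much larger
```

<p>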
Accounts with low edit distances might have an enhanced probability of being Sybils because a low edit distance is often indicative of automatically generated accounts, especially when the number of accounts linked by small edit distances is large. Some threshold number of accounts within a certain edit distance could trigger automatic removal from the GR.</p><p>These types of indicators could fast-track a user to immediate non-Sybil status and maximum influence in the matching pool without needing to include them in the SAD algorithm. Equally, there are likely heuristics that can be used to fast-track users to disqualification too. This could act as an initial screening implemented as part of a “data cleaning” function before the main model executes.</p><p>While the previous section identified some examples of ‘binary’ heuristics that give immediate high-confidence Sybil/non-Sybil tags, there is a wide range of others that can be used in combination with other evidence to influence the SAD model or GP trust score.</p><p>On-chain activities that cost gas do not scale well for Sybils because of the expense, but may well already exist for many Gitcoin users or be cheap enough on a single-use basis that they are affordable for real people but unaffordable for Sybils. On the other hand, social media accounts are cheap and easy to create, which is good for accessibility but probably provides weaker evidence of personhood because they are cheap to create fraudulently.</p><p>The low cost of creation for social media accounts is already reflected in GP data.
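</p><p>The edit-distance heuristic described above can be sketched with the standard dynamic-programming form of the Levenshtein distance. The account names below are invented examples of the near-duplicate pattern:</p>

```python
# Levenshtein (edit) distance via dynamic programming, plus a simple
# scan for suspiciously similar account-name pairs. Names are invented.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

names = ["david1103920", "david1103921", "david1103922", "alice"]
# Pairs within a small edit distance of each other are flagged for review
flagged = [(x, y) for i, x in enumerate(names)
           for y in names[i + 1:] if levenshtein(x, y) <= 2]
print(flagged)  # the three near-identical "david" names pair up; "alice" does not
```

<p>A threshold on the size of such clusters could then trigger the automatic removal described above.</p><p>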
The chart below shows the relative popularity of social media versus on-chain versus proof-of-personhood verification in current GPs.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0ea22ac9100780774b46088af733f9f7dc8288e4ba10d73169c956260887891c.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Interestingly, the popularity is roughly inverse to the effect on the Trust Bonus score. The following plot shows the combinations of verification methods that maximize the trust score. Google and Facebook accounts are included in the top performers, but only when they sit alongside non-social on-chain or proof-of-personhood evidence. The best-performing combination includes BrightID and ENS domain name ownership. Social media alone is not a very effective trust-booster.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c689b466fd71750bc036fa3a1ddf6ff774f29f60c7e835d4df86614fac44e752.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>It seems simple to weight on-chain or proof-of-personhood verification heavily in GP or SAD, but new users, who may also be new entrants to Web3, need to prove themselves as non-Sybil even though they have fresh wallets without much of a transaction history.
Social media accounts are generally good for this, but there are likely populations of users who choose not to use social media and also keep their on-chain activity private for all kinds of valid reasons but still wish to participate in Gitcoin GRs. These people might be susceptible to being squelched or silenced in a GR.</p><p>SAD currently uses some GP data but also data from the existing centralized grants system as features in its machine learning pipeline. This gives more information to the model for making Sybil classifications but will need to be phased out to support Grants 2.0 because that same grant data will not exist. SAD also returns a binary outcome for a user, which is not very conducive to round-on-round growth and learning for new users. It is also potentially quite unfriendly in edge cases, where non-Sybil but low-competence users, or users with an unusual profile that the model determines to be Sybils, are ejected from the system altogether. A continuous measure that scales a user’s voice according to their trustability might provide a smoother experience for users and enable them to gradually grow their influence over successive rounds. Interestingly, the ML model used by SAD already outputs a score between 0 and 1 representing the probability that a user is a Sybil. This value is then thresholded to provide a binary outcome, but it doesn’t necessarily have to be. In a given GR one could imagine a GP stamp representing a user’s SAD score in the previous GR, if they participated. This makes proactive use of retrospective analysis and may incentivize good behaviour because users know that honest actions are rewarded in future rounds, and that Gitcoin has a memory for bad actions built into the passport.</p><p>A related idea is DonorDNA - a vector of Booleans for a given user where each position in the vector maps to a specific grant. The value is 0 if the user did not donate to the grant and 1 if they did.
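</p><p>A minimal sketch of such a vector, together with the Hamming distance used to compare two of them, might look like this (the grant count and donation sets are invented for illustration):</p>

```python
# DonorDNA sketch: one Boolean per grant (1 = donated, 0 = not).
# The Hamming distance between two vectors counts the grants on which
# two donors' behaviour differs. All donation data here is invented.

def donor_dna(donations: set, num_grants: int) -> list:
    """Build the Boolean donation vector for one user."""
    return [1 if g in donations else 0 for g in range(num_grants)]

def hamming(u: list, v: list) -> int:
    """Number of positions at which two equal-length vectors differ."""
    return sum(a != b for a, b in zip(u, v))

NUM_GRANTS = 8
# A benchmark fingerprint built from known Sybil accounts
sybil_benchmark = donor_dna({0, 1, 2}, NUM_GRANTS)
user = donor_dna({0, 3, 5, 6}, NUM_GRANTS)

# A small distance to the benchmark would raise the Sybil probability
print(hamming(user, sybil_benchmark))  # -> 5
```

<p>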
The vectors can act as an identifier for a specific user that can be compared with other users quantitatively using string metrics such as the Hamming distance between them. This gives an efficient way to compare donation profiles between users, especially if it can be combined with knowledge of the users’ Sybil/non-Sybil status so that a typical Sybil DonorDNA fingerprint can be established. The Hamming distance from that benchmark fingerprint could be used as a component in the Sybil-probability calculation and also be included as a field in the user’s Gitcoin Passport. Eventually, more degrees of freedom could be used in each position in the vector to give a very information-dense fingerprint in a user passport. The plot below shows a visual representation of the DonorDNA vectors for accounts in GR14.</p><p>GP provides a way to get the right information into the SAD model without relying on access to personally identifiable information about the users - the trust score abstracts that away. The trust scoring used by GP could be used as a “clean” feature to feed into the SAD model. This might make some of the existing features redundant, leading to efficiency gains. At the same time, data from GP can also be used to implement heuristics in the SAD model, acting as an initial filter for obvious outcomes that further reduces SAD’s computational burden. Reciprocally, the analysis undertaken by SAD can influence the weightings given to GP stamps and provide GR-level insights into GP effectiveness.</p><p>SAD could become an auditing system for a GR that is protected using GP. GP, tuned using SAD insights, would be used as the primary Sybil defense, and SAD would retrospectively analyse the round to determine how well GP performed and make recommendations for tuning in advance of the next round.</p><p>Other differences between SAD and GP include the degree of “forkability”.
Being quite computationally heavy and requiring Gitcoin grant data, SAD is not very composable in that it would be difficult for individual projects to run it for themselves with their own parameter tuning. GP, on the other hand, is very forkable. Individual grant owners can interact using a simple API and tune the anti-Sybil defenses of an individual project to their own needs. SAD can inform projects of its recommendations but cannot really adapt to individual projects’ needs. It could be that in a Grants 2.0 world, the value of SAD is in a) providing recommendations about individual users ‘on request’ rather than squelching suspected Sybils from the round unilaterally, and b) providing insights that tune the GP trust scores, possibly providing recommended ‘default settings’ for GP.</p><p>One of the potential metrics that can arise from a combination of GP and SAD is a cost per fraudulent account. The SAD model can provide a list of accounts tagged as Sybils, while all users’ passports contain stamps relating them to specific real-world actions or assets that have a monetary cost associated with them (this is easy for, for example, NFTs or subscriptions that have a queryable purchase price, less easy for POAPs that represent some time spent or some achievement). This means that by calculating the “value” of the population of Sybil and non-Sybil accounts and creating a simple model, it is possible to attach a minimum cost required for a Sybil to look like a real user. This would be a useful metric because the whole idea is to maximize the cost-of-participation for Sybils while minimizing it for real users. The cost-of-forgery could be used as a foundational metric for assessing the overall anti-Sybil processes across Gitcoin. The top-level goal could then be to make the cost-of-forgery increase round on round.</p><p>It would also be possible to extend the idea of cost-of-forgery to monetary staking.
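</p><p>As a toy illustration of the cost-of-forgery idea, the sketch below assigns each stamp an invented acquisition cost and trust weight, then finds the cheapest stamp combination that clears a passport threshold. None of the numbers are real GP values:</p>

```python
# Toy cost-of-forgery model. Each stamp gets an (invented) acquisition
# cost in dollars and an (invented) trust weight; an account "passes"
# when its total trust clears a threshold. The cost of forgery is the
# cheapest combination of stamps that passes.

from itertools import combinations

STAMPS = {  # name: (cost_usd, trust_points) - all values invented
    "twitter": (1, 5),
    "google": (1, 5),
    "ens": (20, 25),
    "brightid": (30, 35),
    "dao_subscription": (1000, 40),
}

def cost_of_forgery(threshold: int) -> int:
    """Minimum spend over all stamp sets whose trust clears the threshold."""
    best = None
    names = list(STAMPS)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(STAMPS[s][0] for s in combo)
            trust = sum(STAMPS[s][1] for s in combo)
            if trust >= threshold and (best is None or cost < best):
                best = cost
    return best

print(cost_of_forgery(60))  # -> 50: "ens" + "brightid" is the cheapest forgery
```

<p>The round-on-round goal would then be to push this number up, for example by re-weighting stamps whose real-world cost has fallen.</p><p>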
Instead of generating a passport value using proof of past actions, a user could fast-track their way to an equivalent passport value by staking $GTC in a deposit contract, buying non-Sybil status in the GR directly. This works only if the threshold monetary value of the passport is greater than a rational Sybil attacker would be willing to lock up in a deposit contract: by staking $GTC, a user buys proof of personhood for an amount of money slightly greater than a rational Sybil would spend. By generating the equivalent passport value through stamps, a user instead buys proof-of-personhood with time, effort or expertise. It is almost analogous to being able to choose to participate using proof-of-work or proof-of-stake depending on the resources available - be provably anti-Sybil by showing you did something of value (using passport stamps) or be provably anti-Sybil by showing you staked something of value using $GTC.</p><p>Sybil defence often focuses on identifying and “squelching” individual users, but a lot of Sybil behaviour is explicitly or tacitly encouraged by projects themselves, for example by incentivizing Sybils with the promise of future airdrops. One way to manage this is to “tax” a project for its Sybil engagement. SAD makes it possible for Sybils to be identified retrospectively, so the number of Sybil accounts that donate to a specific project can be known shortly after the grant round ends. GP also makes it possible to attach a minimum cost to a Sybil. Together, these metrics could be used to tax a project proportionally to its Sybil/non-Sybil ratio, with the money diverted to other projects in the next GR; or this could be a continuous stream where projects are taxed in near real-time and the money flows back into the matching pool, incentivizing projects to actively discourage their communities from using Sybil tactics.
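</p><p>A proportional tax of this kind could be computed per round along these lines; the matching amounts, donor counts and flat tax rate are all invented for illustration:</p>

```python
# Sketch of a per-project "Sybil tax": each project's matching
# allocation is reduced in proportion to the share of its donors that
# were flagged as Sybils, and the deduction is returned to the matching
# pool. All figures are invented.

def sybil_tax(matching: float, sybil_donors: int, total_donors: int,
              rate: float = 1.0) -> float:
    """Amount deducted from one project's matching allocation."""
    if total_donors == 0:
        return 0.0
    return matching * rate * (sybil_donors / total_donors)

# name: (matching_usd, flagged_sybil_donors, total_donors)
projects = {"projA": (10_000, 40, 200), "projB": (10_000, 2, 200)}

pool_refund = 0.0
for name, (matching, sybils, donors) in projects.items():
    tax = sybil_tax(matching, sybils, donors)
    pool_refund += tax
    print(f"{name} taxed {tax:.0f}")
print(f"returned to matching pool: {pool_refund:.0f}")  # -> 2100
```

<p>A project with 20% flagged donors loses far more matching than one with 1%, which is exactly the incentive described above.</p><p>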
This also pushes some responsibility for Sybil defense away from Gitcoin and into communities.</p><p>Both GP and SAD aim to minimize the influence of Sybils on a GR. They take different approaches to achieving this, with SAD focusing on retroactive identification and removal of Sybils and GP providing preventative measures in the form of a user passport that contains evidence of personhood. Both of these approaches are highly valuable in their own right, but Gitcoin can level up its Sybil defense by combining them in creative ways. As we approach Grants 2.0, this is important preparatory work that lays the foundations for composable, independent Sybil resistance mechanisms that can be deployed and customized by individual grant owners.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Ropsten merge-testing with GethStar]]></title>
            <link>https://paragraph.com/@jmcook/ropsten-merge-testing-with-gethstar</link>
            <guid>AXKzm9y2DsctWqtpyWgi</guid>
            <pubDate>Mon, 30 May 2022 11:19:57 GMT</pubDate>
            <description><![CDATA[The Ropsten merge is a significant milestone in Ethereum’s progress towards moving to proof-of-stake. It is the first pre-existing public testnet to be merged, making it an important test-case for merging Mainnet. It has already been entertaining because a naughty miner deployed a lot of hashpower to the network and brought the merge date suddenly much closer - so close that the merge date passed before a Ropsten Beacon Chain even existed. The client teams quickly posted fixes that pushed the...]]></description>
            <content:encoded><![CDATA[<p>The Ropsten merge is a significant milestone in Ethereum’s progress towards moving to proof-of-stake. It is the first pre-existing public testnet to be merged, making it an important test-case for merging Mainnet. It has already been entertaining because a naughty miner deployed a lot of hashpower to the network and brought the merge date suddenly much closer - so close that the merge date passed before a Ropsten Beacon Chain even existed. The client teams quickly posted fixes that pushed the trigger for the merge (TTD - <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#terminal-total-difficulty">terminal total difficulty</a>) into the far future. Nodes that were sync’d to Ropsten halted because consensus and block gossip had been switched off in Geth and there was no Beacon Chain yet to take over those responsibilities.</p><p>I am writing this article on the day that the Ropsten Beacon Chain is supposed to go live and the client fixes that bump the TTD have already been released. The Ropsten merge itself will not happen for a few days so there is still time to participate by syncing a node and spinning up a validator.</p><p><strong>Update:</strong> Ethereum Foundation notes on the Ropsten merge are now available <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://notes.ethereum.org/@timbeiko/ropsten-ttd-blog">here</a>.</p><p>It is important to make responsible choices for both the execution and consensus client. The correct choice will differ person by person, but using minority clients is strongly encouraged. I chose to use Lodestar for my consensus client in this tutorial as it is currently one of the lesser-used clients, being newer than some of the others.
Increasing the adoption of minority clients is important for evening out the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mirror.xyz/jmcook.eth/S7ONEka_0RgtKTZ3-dakPmAHQNPvuj15nh0YGKPFriA">client diversity</a>, which has security benefits for the network as a whole. I am, however, still using <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geth.ethereum.org">Geth</a> as my execution client despite it being the majority client on the execution layer, simply because I know Geth quite well and already have a Geth node sync’d to Ropsten. Readers are encouraged to experiment with minority execution clients.</p><p>This article will show how to do the following using the Geth-Lodestar combo known as Geth-Star:</p><ol><li><p>Install and run the execution client, Geth.</p></li><li><p>Install and run the consensus client, Lodestar.</p></li><li><p>Deposit 32 ETH into the deposit contract and generate validator keys.</p></li><li><p>Spin up a validator using Lodestar.</p></li><li><p>Attach a console to Geth and make some transactions.</p></li></ol><p>The instructions that follow are for a Linux OS - I am using Ubuntu 20.04. The instructions should be very similar for other distributions/OS’s.</p><h2 id="h-1-execution-client-geth" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">1) Execution Client: Geth</h2><p>Download the latest version of Geth from Github using the command line, switch into the project root directory, <code>go-ethereum</code>, then switch over to the <code>unstable</code> branch and build Geth:</p><pre data-type="codeBlock" text="git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
git checkout -b unstable
make
"><code>git clone https://github.com/ethereum/go-ethereum.git
cd go-ethereum
git checkout -b unstable
make
</code></pre><p>Now Geth can be started. The configuration is defined using a series of flags passed to Geth at startup. This will include defining the <code>datadir</code> where all the blockchain data will be saved (we will start a new directory called <code>ropstendata</code>) and enabling the various communication channels that allow the consensus and execution clients to connect. The following command will start Geth, printing logs to the terminal.</p><pre data-type="codeBlock" text="./geth --ropsten --datadir ropstendata --authrpc.addr localhost --authrpc.port 8551 --http --authrpc.vhosts localhost --authrpc.jwtsecret ropstendata/jwtsecret --http.api eth,net --override.terminaltotaldifficulty 50000000000000000
"><code>./geth --ropsten --datadir ropstendata --authrpc.addr localhost --authrpc.port 8551 --http --authrpc.vhosts localhost --authrpc.jwtsecret ropstendata/jwtsecret --http.api eth,net --override.terminaltotaldifficulty 50000000000000000
</code></pre><p>This terminal should then be left running. The client will start by syncing - this means downloading the historical blockchain data and verifying it - this may take a few days as Ropsten has been around for a while and has a substantial history. If the terminal is shut down then Geth will stop and will not continue syncing until it is restarted with the command above.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0263165d9db8a5a5f6d08ca77d0bcbad5bcb0ab46e67865a1d9ae052cb42c0cc.png" alt="Geth syncing to Ropsten" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Geth syncing to Ropsten</figcaption></figure><h2 id="h-lodestar-consensus-client" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Lodestar Consensus Client</h2><p>While Geth is syncing, the consensus client can be set up. Download Lodestar from the Chainsafe Github. Once downloaded, switch into the project directory and build the client using <code>yarn</code>. The following commands install and build Lodestar:</p><pre data-type="codeBlock" text="git clone https://github.com/chainsafe/lodestar.git
cd lodestar
yarn install --ignore-optional
yarn run build
"><code>git clone https://github.com/chainsafe/lodestar.git
cd lodestar
yarn install --ignore-optional
yarn run build
</code></pre><p>Once built, Lodestar is ready to be started up. At this point, we want to start a Beacon node, not a validator. This is configured by passing <code>beacon</code> or <code>bn</code> to Lodestar on startup. It is also necessary to define the port to connect to the execution client and the location of an authentication token (jwt-secret) in the Geth data directory. These allow Lodestar to connect to Geth. In this example, the secret was saved to the same <code>datadir</code> as the blockchain data. The following command will start Lodestar, connect to the running Geth instance and start syncing the Ropsten Beacon Chain (once the Ropsten BC exists). The terminal must be left running in order for the sync to continue.</p><pre data-type="codeBlock" text="./lodestar beacon --network ropsten --eth1.providerUrls http://localhost:8551 http://localhost:8545 --jwt-secret /home/go-ethereum/ropstendata/jwtsecret --terminal-total-difficulty-override &quot;50000000000000000&quot;
"><code>./lodestar beacon --network ropsten --eth1.providerUrls http://localhost:8551 http://localhost:8545 --jwt-secret /home/go-ethereum/ropstendata/jwtsecret --terminal-total-difficulty-override "50000000000000000"
</code></pre><h2 id="h-adding-a-validator" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Adding a validator</h2><p>It is now possible to run a validator connected to the consensus client. This requires at least 32 Ropsten ETH. <strong><em>Running a validator is optional</em></strong> - it is still useful to have the two clients running as described above as this gives private, permissionless, censorship-resistant and fast access to the Ethereum blockchain, with Geth processing transactions and Lodestar tracking the head of the chain. However, running a validator means participating fully in the security of the network.</p><p><strong><em>I do not know of a functional Ropsten faucet that pays out anywhere close to 32 ETH, so unfortunately it is a case of begging and borrowing from Ropsten whales at the moment.</em></strong></p><p>The easiest way to start a validator is to visit the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ropsten.launchpad.ethereum.org/en/">Ropsten Launchpad</a>. This page walks the reader through depositing their ether to the deposit contract and generating validator keys. Please read the various warnings and take the validator commitment statement seriously - this is a testnet, but it is great practice for spinning up a Mainnet validator later.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/011a2788e38c3255cc3ff8f7cc5e3a8a96f1f76e57f2eb3ef130abc07953ff5c.png" alt="Choose the Download Key Gen GUI app option" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Choose the Download Key Gen GUI app option</figcaption></figure><p>Using the Key Gen GUI app is a very convenient way to generate the validator keys.
Simply follow the instructions provided in the GUI. For this tutorial, the keys should be saved to <code>ropstendata/keystore</code>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a513db4ec19b79df91b4af0a1ee564dd705d1dde47924c751927e37630110860.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The launchpad will also give instructions for sending 32 ETH to the deposit contract. Follow the instructions. A successful deposit looks like this:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e5cdd22dadc912a77707e335849a87e5539a8a71cc0fb4f683b739a7049cd4fc.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Once the keys are saved and the deposit made, a validator can be started in a third terminal. The following command imports the new validator keys into Lodestar:</p><pre data-type="codeBlock" text="./lodestar account validator import --network ropsten --directory ropstendata/keystore
"><code>./lodestar account validator <span class="hljs-keyword">import</span> <span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-title">network</span> <span class="hljs-title">ropsten</span> <span class="hljs-operator">-</span><span class="hljs-operator">-</span><span class="hljs-title">directory</span> <span class="hljs-title">ropstendata</span><span class="hljs-operator">/</span><span class="hljs-title">keystore</span>
</code></pre><p>A successful import will request a password. Keep this safe! A successful validator import looks like this:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7a22e0d9eb4cfc3ce0df104664648b9ef396f0b515d350d9c34a5edfb5b96c2e.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The validator can then be started using the following command:</p><pre data-type="codeBlock" text="./lodestar validator --network ropsten --terminal-total-difficulty-override &quot;50000000000000000&quot;
"><code>./lodestar validator --network ropsten --terminal-total-difficulty-override "50000000000000000"
</code></pre><p>There is a queue for validator activation, so there will be a period of a few days before the validator does anything interesting. Once it activates it will start logging something like this:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/de191c7e54250abc9be90f18bc499a9449ed777b22645eed172e152b245d42ed.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><h2 id="h-tests" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Tests</h2><p>At this point, three terminals are running concurrently - Geth (execution client), Beacon node and validator (or just Geth and Beacon node if no validator was added). This setup is a private portal to Ethereum. Geth provides the tooling for interacting with the Ethereum network. Commands are sent to Geth in the form of JSON objects that conform to the JSON-RPC-API spec. This can be done directly by sending curl requests to Geth’s exposed http port. However, this is tedious and quite error-prone, so it is more common to use a convenience library such as Web3.js. Geth provides a Javascript console that exposes most of the Web3.js API. The console can be started by opening a new terminal and connecting using IPC:</p><pre data-type="codeBlock" text="./geth attach ropstendata/geth.ipc
"><code>./geth attach ropstendata<span class="hljs-operator">/</span>geth.ipc
</code></pre><p>The Geth Javascript console will open. Check the accounts that exist in the keystore using:</p><pre data-type="codeBlock" text="eth.accounts
"><code>eth.accounts
</code></pre><p>Make a simple transaction using:</p><pre data-type="codeBlock" text="personal.sendTransaction({from: eth.accounts[0], to: eth.accounts[1], value: 20000000000000}, &quot;yourpassword&quot;);
"><code>personal.sendTransaction({from: eth.accounts[0], to: eth.accounts[1], value: 20000000000000}, "yourpassword");
</code></pre><p>Also try some more complex interactions such as deploying and interacting with smart contracts. Much more information about using the Geth console is available at the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geth.ethereum.org">Geth docs</a>.</p><h2 id="h-the-pandas-of-justice" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Pandas of Justice</h2><p>Watch the terminals around the time of the merge (expected to be 8-9 June). A successful transition to proof-of-stake will be indicated by the merge Pandas appearing on screen!</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5d57af6a9ca08a090d1b84e31f050149365ea40cd5e4a62a4e380faa38274fb2.jpg" alt="Success Pandas" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Success Pandas</figcaption></figure>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Fighting Simple Sybils: Levenshtein Distance]]></title>
            <link>https://paragraph.com/@jmcook/fighting-simple-sybils-levenshtein-distance</link>
            <guid>5LQSSKzM7Byy3UMIiSzt</guid>
            <pubDate>Thu, 19 May 2022 13:07:02 GMT</pubDate>
            <description><![CDATA[Quadratic funding - the mechanism that currently determines the value of Gitcoin grant funding - is inherently vulnerable to Sybil attacks. Sybil attacks are individual humans dividing themselves into multiple “virtual humans” in order to gain additional voting weight. In traditional banking and voting systems, Sybil resistance comes from “KYC” (know-your-customer) which links personal identifying information to some action. In Web3, “KYC” is generally minimized because it undermines the ...]]></description>
            <content:encoded><![CDATA[<h2 id="h-sybils" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Sybils</h2><p>Quadratic funding - the mechanism that currently determines the value of Gitcoin grant funding - is inherently vulnerable to Sybil attacks. Sybil attacks are individual humans dividing themselves into multiple “virtual humans” in order to gain additional voting weight. In traditional banking and voting systems, Sybil resistance comes from “KYC” (know-your-customer) which links personal identifying information to some action. In Web3, “KYC” is generally minimized because it undermines the core ethos of censorship resistance and permissionlessness. This means other methods are required to identify which participants in a grant round are real individual humans, and which are not.</p><h2 id="h-sybil-strategies" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Sybil Strategies</h2><p>The goal of Sybil defense is to increase the investment of time and money required for an attacker to convince a grant review system that they are &gt; 1 person to the extent that a rational attacker would not do it. Defenders constantly attempt to push this cost up while minimizing their own expenses, while attackers constantly try to pull the attack cost down. The greater the size of the exploitable pool of funds, the higher the cost an attacker will be willing to pay. At the same time, extremely low-cost Sybil attacks are often worthwhile for attackers because even a low success rate can still be profitable if the attack cost is sufficiently low.
This means that a robust Sybil defense structure requires systems that identify cheap, simple attacks effectively and efficiently, as well as more complex defenses against sophisticated attacks.</p><h2 id="h-simple-sybils" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Simple Sybils</h2><p>The simplest and cheapest form of Sybil attack is to generate a large number of addresses and try to vote with all of them. Ordinarily, these will be sifted out of the grant review system by human reviewers because they usually fail even basic <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://proofofpersonhood.com/">proof-of-personhood</a> checks. However, there is a substantial cost associated with these human reviews. To optimize the Sybil defense mechanism for high efficacy and low cost, detection of these cheap Sybil attacks must be automated using computationally inexpensive algorithms.</p><h2 id="h-levenshtein-distance" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Levenshtein Distance</h2><p>Levenshtein distance is a concept from information theory that measures the difference between two text strings. It is commonly used in natural language processing. The Levenshtein distance between string <code>a</code> and string <code>b</code> is the minimum number of single-character edits required to transform <code>a</code> into <code>b</code>. The fewer edits required to change one string into the other, the smaller the Levenshtein distance. The algorithm, in its naive recursive form, is as follows:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b2e26afa40f3a1dac036b2e0223329c8b68a8f2c65c811e305a2b327708979dc.png" alt="
where a and b are the strings to be compared, |a| and |b| are the lengths of the strings" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">where a and b are the strings to be compared, |a| and |b| are the lengths of the strings</figcaption></figure><p>In the context of Gitcoin grants, this idea can be applied to account names that are suspiciously similar. Very similar names might indicate low-effort Sybil attempts; for example, the following account names stand out as suspiciously similar and worth investigating to ensure they are genuine individual voters:</p><pre data-type="codeBlock" text="david1103920
david1103921
david1103922
david1103923
david1103924
david1103925
david1103926
"><code></code></pre><p>Accounts separated by a single edit might not be quite as obvious as those above. Here are some slightly more subtle examples of accounts separated by a single edit:</p><pre data-type="codeBlock" text="ahmeddle
bhmeddle
chmeddle

hombre
h0mbre
hombr3

j1lly
j2lly
j3lly
"><code></code></pre><p>All of these accounts are easily identified with the constraint <code>Levenshtein distance &lt;= 1</code>.</p><h2 id="h-algorithm-performance" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Algorithm performance</h2><p>The Levenshtein algorithm is computationally expensive in its naive form because the recursion must be applied to every possible pair of accounts. However, it can be made performant using <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://turnerj.com/blog/levenshtein-distance-part-2-gotta-go-fast">matrix algebra</a>. This approach was recently taken by the FDD Sybil defense squad using a list of ~298,000 registered Gitcoin accounts (meaning the Levenshtein distance algorithm was applied to a matrix of 298,000<sup>2</sup> elements). The result was a file with 129,297,647 rows indicating all pairs with edit distances less than 4. As the distance threshold decreases, the number of accounts flagged also decreases, but the likelihood that the flagged accounts are Sybils increases. The threshold distance therefore acts as a calibration slider that the Sybil defense squad can use to balance Sybil detection rate against cost of computation.</p><h2 id="h-use-cases" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Use cases</h2><p>The technique has been applied to the full dataset of all Gitcoin accounts as a way to quantify the bulk number of suspicious accounts, but it could also be run periodically as a service to flag accounts created within some time window. It could even be integrated as a gadget in the grant application workflow.</p><p>To demonstrate that the Levenshtein distance really is identifying Sybil-like behaviour, some of the accounts identified by the algorithm can be examined manually. 
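</p><p>For illustration, the screening can be sketched in Python. This is the standard dynamic-programming form of the algorithm (not FDD&apos;s optimized matrix implementation), applied to the example account names shown earlier:</p>

```python
# Standard dynamic-programming Levenshtein distance; equivalent to the
# recursive definition above, but without the exponential recursion.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Screen the illustrative account names from above for near-duplicates.
accounts = ["ahmeddle", "bhmeddle", "hombre", "h0mbre", "j1lly", "j2lly"]
flagged = [(x, y) for i, x in enumerate(accounts)
           for y in accounts[i + 1:] if levenshtein(x, y) <= 1]
print(flagged)  # [('ahmeddle', 'bhmeddle'), ('hombre', 'h0mbre'), ('j1lly', 'j2lly')]
```

<p>Raising the threshold above 1 flags more candidate pairs at the cost of more false positives - the calibration trade-off described above.</p><p>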
These three accounts were flagged by the Levenshtein algorithm as potential Sybils:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d8edc4a9484847ccf77c87cbd918f1ccfd15028f70d7a9d06b0d06f3c58ec5a1.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/379b0c81fb85f4a7e3d1b5eb536027773d75d4d7bc84a8f2acdb94c2d9b01d1b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5ff7ad6d18693e739243041963ca89676e6739e0a1f1712be851c0e7f1c7167d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Examine these accounts in more detail using links: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/martin1156010">Account 1</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/martin1156011">Account 2</a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/martin1156012">Account 3</a>.</p><p>The account names are separated by a single edit (Levenshtein distance ==1). 
All three accounts have donated a total of precisely $37, divided identically between the same eight projects in the same round. None of the accounts have participated in bounties, hackathons, tips or kudos. All three were created at about the same time and have no activity prior to, or following, those eight votes. These seem to be obvious Sybils that were correctly flagged by the Levenshtein algorithm. Furthermore, the three accounts above were just three of eight accounts that were all separated by a single edit, and all shared the same Gitcoin activity profile and voting record.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>Creating multiple accounts with very similar names is a lazy attack, and probably easily identified by humans in the grant review system. However, identifying these accounts by applying a Levenshtein distance algorithm to the entire set of accounts in a grant round removes some of the load on the human reviewers and acts as a cost-optimization because the algorithmic approach is fast and cheap.</p><h2 id="h-further-reading" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Further Reading</h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/@ethannam/understanding-the-levenshtein-distance-equation-for-beginners-c4285a5604f0">Levenshtein distance for beginners</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://planetcalc.com/1721/">Online Lev calculator</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://towardsdatascience.com/text-similarity-w-levenshtein-distance-in-python-2f7478986e75">Levenshtein distance in Python</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" 
href="https://people.cs.pitt.edu/~kirk/cs1501/Pruhs/Spring2006/assignments/editdistance/Levenshtein%20Distance.htm">Levenshtein distance in 3 languages</a></p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[FDD: Gitcoin DAO's Trust Function]]></title>
            <link>https://paragraph.com/@jmcook/fdd-gitcoin-dao-s-trust-function</link>
            <guid>4mNEnv1Qama6iZ4GmCIF</guid>
            <pubDate>Tue, 17 May 2022 08:40:09 GMT</pubDate>
            <description><![CDATA[Across 13 rounds, Gitcoin has given almost $60 million to public goods. Projects that demonstrably create positive externalities bid for portions of the total funding pool. Like any substantial pool of money, Gitcoin grants attract diverse attacks from bad actors aiming to divert a portion of that money away from public goods and into their own wallets. The role of Gitcoin&apos;s Fraud Detection and Defense (FDD) squad is to protect the Gitcoin community - a diverse group that includes users,...]]></description>
            <content:encoded><![CDATA[<p>Across 13 rounds, Gitcoin has given almost <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/results">$60 million</a> to public goods. Projects that demonstrably create positive externalities bid for portions of the total funding pool. Like any substantial pool of money, Gitcoin grants attract diverse attacks from bad actors aiming to divert a portion of that money away from public goods and into their own wallets. The role of Gitcoin&apos;s Fraud Detection and Defense (FDD) squad is to protect the Gitcoin community - a diverse group that includes users, $GTC holders, grant recipients, donors and stakeholders in funded projects - from these attacks. From FDD emerges a protective layer that filters out attackers, enables partnerships with people and projects that have genuinely good intentions and delivers a trustworthy set of grant decisions. In doing so, FDD minimizes financial spillage to dishonesty and incompetence and thereby maximizes the public goods that can be supported by a given pool of funds. This article explores the various components of FDD and explains how they operate together to form a community &quot;trust function&quot; that protects public goods.</p><h2 id="h-trust-in-fdd" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Trust in FDD</h2><p>FDD aims to deliver grant decisions that can be trusted by the community.</p><p>In this context, trust can be defined as belief that the grant evaluation system is effective at eliminating dishonesty and wastefulness. To foster this belief the community must perceive the process to be transparent and well-aligned with its values. This requires an open system that ensures grant applicants, grant reviewers and voters act honestly. 
Trust can be thought of as the synthesis of five core concepts:</p><p><strong>T</strong> Transparency <strong>R</strong> Relationships <strong>U</strong> Understanding <strong>S</strong> Shared Success <strong>T</strong> Truth-telling</p><p>Maximizing these core values in the FDD protective layer enhances the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.ca/general/2021/03/23/legitimacy.html">legitimacy</a> of grant decisions. This requires FDD to defend against attacks by dishonest grant applicants, voters and reviewers while simultaneously implementing an introspective process of self-refinement.</p><p>There are many ways a grant-hacker can try to game the system. A single user might submit many proposals without necessarily intending to deliver on them, relying on pseudonymity to avoid reputational damage. A user might also divide their donations to specific grants across many wallets, gaming the funding mechanism by dividing themselves into multiple virtual humans, hoping that the system gives each of their avatars a real person&apos;s voting weight. This is known as a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Sybil_attack">&quot;sybil attack&quot;</a> - effectively one person pretending to be many. There are several other attack vectors, such as providing false or ambiguous information that makes fraudulent grants appear to meet the eligibility requirements - this is not always easy for a grant reviewer to detect. The grant reviewers themselves and the tools they use to evaluate grants must also be prevented from becoming dishonest.</p><p>FDD is effectively a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/GrYD_gTHgys">function</a> that takes grant applications as inputs and outputs judgments. 
It&apos;s actually a nonlinear composite function with many input features and interoperable modules. By optimizing the FDD function against health metrics and feedback from the community, we hope to converge adaptively towards true legitimacy.</p><h2 id="h-the-fdd-function" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The FDD Function</h2><p>The FDD function can be divided into three broad components (shown in three colours in Figure 1):</p><ul><li><p>GIA (Grants Intelligence Agency)</p></li><li><p>Sybil Defenders</p></li><li><p>Evolution</p></li></ul><p>At a high level, the GIA sets and enforces the criteria of grant eligibility. The community votes on how much funding each grant should receive from the matching pool. The current mechanism for choosing the amount awarded to each grant is Quadratic Funding; however, this is subject to change with the development of Grants 2.0 (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/gitcoin-grants-2-0/9981">https://gov.gitcoin.co/t/gitcoin-grants-2-0/9981</a>) and the mechanism design initiative, Catalyst. The Sybil Defenders identify voters that are dishonestly gaming the voting system and remove them. Then Evolution examines the overall grant-giving metrics and makes improvements for the next round. This is an accountability system that ensures the FDD squad itself is honest, transparent and efficient. Each of these top level components is composed of several sub-components that will be explained in the next sections.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/411e0d852368ec9dc8204a927c441404fd83d583d3397f148a68acc59ff41ee3.png" alt="Figure 1: Flow diagram showing the flow of information through FDD, from grant application to grant decision and back!" 
blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Figure 1: Flow diagram showing the flow of information through FDD, from grant application to grant decision and back!</figcaption></figure><h2 id="h-grant-intelligence-agency-gia" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Grant Intelligence Agency (GIA)</h2><p>The task of GIA is to assess the validity of each grant. This breaks down into several distinct tasks:</p><ol><li><p>Ensure grants come from legitimate applicants.</p></li><li><p>Ensure grants meet basic eligibility requirements.</p></li><li><p>Ensure grants meet a threshold of quality and are aligned with Gitcoin community values.</p></li><li><p>Ensure the grant review is conducted fairly;</p></li><li><p>Manage appeals and investigations.</p></li></ol><p>Each of these tasks has a team dedicated to it.</p><p>At the moment grant applications are reviewed by a relatively small set of highly trusted individuals who have built up their working knowledge and mutual trust within the Gitcoin DAO over several grant rounds. This works quite well but having grant outcomes decided by a small pool of trusted individuals runs contrary to the base ethos of decentralization, permissionlessness and transparency that define the DAO. To put the decentralization into context, whereas all reviews were done by the Gitcoin core team up to the 10th Gitcoin grants round, subsequent rounds have distributed the evaluations across more individuals managed by the DAO. 
To decentralize further, the grant reviewing process needs to continue to expand to include more human reviewers; however, opening up the review process also creates vulnerabilities that can be exploited intentionally by thieves and saboteurs and unintentionally by less competent participants.</p><p>Some system must be put in place that incentivizes honest and diligent behaviour while also widening participation across the community. This is the domain of the Grant Intelligence Agency Rewards Team. Their remit is to design and implement a reviewer-incentivization scheme that attracts, trains and retains trustworthy reviewers. For each grant, the system must maximize the trustworthiness of the decision while minimizing the cost per review. Correctly incentivizing reviewers is a route to increasing the trustworthiness of the humans reviewing grants - one critical part of the overall grant review process that also includes Sybil defense.</p><p>Incentivization can take many forms; it includes both financial and non-financial rewards. Optimizing the incentivization model means finding the right reward criteria, as well as the tempo, value and method of payment, with the aim of maximizing trust and minimizing cost. The right model widens participation so that the need to trust a small group of known reviewers diminishes, and also maximises learning opportunities, communication efficacy and reviewer retention as positive externalities.</p><p>Several potential models have been proposed during GR13. The simplest is a hierarchical model where lower-trust reviewers complete an initial evaluation which is then passed to a higher-trust reviewer to confirm. Alternatives include a pooled system where the final decision comes from a majority vote across multiple reviewers. The leading concept at the moment seems to be a random assignment model that assigns grants to pools of reviewers that can come from multiple &quot;trust levels&quot;. 
A more in-depth exploration of these conceptual models is available <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/4GJZWaMoSIG3T7HHm-Ch_Q">here</a>. In GR14 the aim is to move from conceptual models to numerical simulations. At the same time, there are plans to incorporate natural language processing into the review process so that a degree of intelligent automation can support the humans in the system.</p><p>Once grants have been determined to be eligible, high quality and value-aligned, they can progress to funding. Funds are allocated using a quadratic voting mechanism where the number of donations made to a particular grant influences the total amount awarded much more than the monetary value of each donation, because each donation is accompanied by a donation from a central &quot;matching pool&quot;. Smaller contributions attract proportionally larger amounts from the matching pool, up to a cap. This means grants are awarded more when many people contribute small amounts than when a few people contribute large amounts. This mechanism is inherently vulnerable to Sybil attackers, since dividing a single donation into many smaller ones amplifies the contribution from the matching pool. Defending against these Sybil attacks is predominantly the domain of the Sybil Defenders, but there is also a team within GIA that addresses this specific issue: the Trust Bonus team.</p><p>Trust Bonus is a system of incentivizing Gitcoin users with extra influence in the matching pool in exchange for proof that they are real individual humans. Valid evidence of personhood, including Proof of Humanity, BrightID, Idena, SMS, ENS, POAP, Twitter, Facebook and Google accounts, enables a person to boost their weighting in the matching pool.</p><p>Grant applicants are also able to appeal against decisions after-the-fact. 
This is managed by the Policy squad, who are also responsible for creating and maintaining the various policies that govern the use of the Gitcoin platform and participation in grants rounds. Historically, the appeals process has involved an initial review by the FDD squad before being made available to a team of Stewards for comments. Support from at least 5 Stewards leads to a successful appeal and update to the grants policy if ratified by GTC holders (Figure 2). However, during GR13 there was extensive <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/proposal-grants-policy-update-projects-with-tokens/10125/24?u=disruptionjoe">discussion on the Gitcoin Governance Forum</a> about a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/gr13-governance-brief/10289#grants-disputes-appeals-9">particular grant</a> that will most likely lead to refinements to the appeals process in GR14, especially around widening access to appeals decisions to other DAO workstreams and the broader community.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c2f3990201c65451039ab0cae5867e66861353043a488ef3f433cd04c8948267.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The whole grant review process is overseen by a trusted group, &quot;Round Management&quot;, who ensure that things progress at the right tempo and meet quality benchmarks. They act as an oversight panel that maintain alignment with the Gitcoin ethos and amplify the community voice in decision making. 
To do so, they connect with all the various subgroups within GIA and externally.</p><p>Recently, GIA has started to incorporate <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethelo.org/ethelo-first-on-digital-democracy-ranking-2021">Ethelo</a> as a grant management tool. This standardizes the grant management and offers enhanced metricization so the community can more easily digest details about every grant round, supporting transparency and decision-decentralization.</p><p>In summary, taking applications as inputs, the grant intelligence agency returns vetted, eligible grants that Gitcoin DAO can fund confidently. The next challenge for FDD is to ensure the voting procedure that determines the allocated amount proceeds fairly and transparently. As explained above, the grants intelligence agency contributes to that through the trust bonus scheme, but primary responsibility is handed over to the Sybil Defenders.</p><h2 id="h-sybil-defense" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Sybil Defense</h2><p>Arguably, the Gitcoin grant system&apos;s greatest vulnerability is dishonest gaming of the votes, because Sybil vulnerability is inherent to the quadratic voting model. Because the system attempts to assign funds on the basis of how many humans support a grant rather than the capital deployed in support of it, dividing a donation into many small &quot;votes&quot; is a cheap and effective way to boost a particular grant&apos;s allocation. This is a form of Sybil attack, since a single human has divided themselves into multiple virtual humans to increase their voting efficacy. 
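</p><p>The amplification is easy to see in a toy model of the matching calculation, where a grant&apos;s matching weight scales with the square of the sum of the square roots of its contributions (this sketch ignores caps and normalization against the matching pool):</p>

```python
import math

# Toy quadratic-funding matching weight: the square of the sum of the
# square roots of the individual contributions (no caps, no normalization).
def qf_weight(contributions):
    return sum(math.sqrt(c) for c in contributions) ** 2

whale = qf_weight([100.0])       # one donor giving $100
sybil = qf_weight([1.0] * 100)   # one donor pretending to be 100 donors
print(whale, sybil)  # 100.0 10000.0
```

<p>Splitting one $100 donation into a hundred $1 donations inflates the matching weight a hundredfold in this simplified model, which is exactly what makes the attack cheap and effective.</p><p>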
This happens at an alarming scale - of all the Gitcoin users making donations to the most recent grants round, about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gov.gitcoin.co/t/gr13-governance-brief/10289/1">12% were estimated to be sybil</a> attackers, although bot detection is notoriously difficult and this estimate carries large uncertainty. These sybil attackers also evolve new, more complex strategies in every round, making them harder to detect. The Sybil defenders are the blue team in a deeply adversarial arms race. The role of the Sybil Defense team is to amplify honest voices and suppress adversarial ones.</p><p>The primary tool used to detect sybil accounts is a semi-supervised machine learning algorithm. The purpose of the algorithm is to detect sybil-like behaviours in votes for specific grant applications. To do this, there is a well-defined pipeline connecting data about the grant applicants to a sybil/non-sybil judgment. The data sources are the grant application and information scraped from the applicant&apos;s Github profile. From these sources, a set of characteristics is extracted. These characteristics are those assumed to be predictive of Sybil behaviours, and they are used as model inputs (&quot;features&quot;). The model itself is a &quot;random forest classifier&quot;, which divides the data into successively more specific categories until eventually giving each record a label - in this case &apos;sybil&apos; or &apos;not sybil&apos;. This model (known as ASOP, Anti-Sybil Operationalized Process) was developed and operated by a contracted team (Blockscience) but it has now been handed over to the community. In parallel, the FDD squad has also developed a &quot;community model&quot; that will now run alongside ASOP, probably in an ensemble that also includes human evaluators. 
Gitcoin DAO FDD-stream contributors now run both models end-to-end, boosting its decentralization.</p><p>The models used by Sybil Defense are &quot;human-in-the-loop&quot; algorithms which means that the sybil detection is not entirely automated, but requires human input at several points in the pipeline. There are many reasons why this is critical. At the highest level, the model must align with the values and aims of the community, rather than rigidly following the rules written into it by its developers. This includes coming to consensus on a set of behaviours the community considers to be &quot;sybil-like&quot; that the model can seek out. At a lower level, humans (&apos;sybil defenders&apos;) are required to evaluate grants manually to provide critical &quot;ground truth&quot; data that can be used to assess and tune the model performance. The machine-learning model is used in combination with two other pieces of evidence (survey answers provided by human evaluators and a set of heuristics) to generate an &quot;aggregate&quot; score that is used as a diagnostic tool for identifying sybils. The self-similar, quasi-fractal nature of the DAO raises its head here, as human-machine interactions generate a ML model that itself is aggregated with human evaluations to generate a diagnostic tool that is analysed in an automated data-science pipeline overseen by human analysts. &quot;Human-in-the-loop&quot; is a descriptor that applies across scales from FDD subcomponents, their parents, FDD and the DAO. In the end, the model is a tool for strengthening human coordination, not replacing it. Each grant outcome is a subjective decision but that decision is more trustworthy when backed by increasingly robust data analysis tools such as ASOP. 
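</p><p>As a purely hypothetical illustration of how three lines of evidence might be blended into one aggregate score: the sketch below combines a model probability, heuristic hits and human votes. The weights and field names are invented for this example and are not the actual ASOP parameters:</p>

```python
# Hypothetical blend of three evidence sources into an aggregate Sybil score.
# The weights here are illustrative assumptions, not ASOP's real parameters.
def aggregate_score(model_prob, heuristic_flags, human_votes,
                    weights=(0.5, 0.25, 0.25)):
    """Weighted blend of model probability, heuristic hit-rate and human votes.

    model_prob:      classifier's sybil probability in [0, 1]
    heuristic_flags: list of booleans, one per heuristic rule triggered
    human_votes:     list of booleans, one per human evaluator's 'sybil' vote
    """
    heuristic = sum(heuristic_flags) / len(heuristic_flags)
    human = sum(human_votes) / len(human_votes)
    w_model, w_heur, w_human = weights
    return w_model * model_prob + w_heur * heuristic + w_human * human

# A suspicious account: high model probability, most heuristics and all
# human evaluators agree.
score = aggregate_score(0.9, [True, True, False], [True, True, True])
print(round(score, 3))  # 0.867
```

<p>A score like this can then be thresholded, with borderline cases routed back to human evaluators - keeping the human in the loop.</p><p>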
Combining humans-in-the-loop machine learning with optimized incentives for evaluators encourages the decentralization of the system and ensures the system stays ethically aligned to the community.</p><p>Votes that the sybil defenders identify as dishonest are removed from the pool. At this point, the grants and the votes have both been through a series of semi-automated processes that have trimmed away dishonesty and malfeasance and left a core of legitimate public-goods projects that can be supported according to an approximation of community sentiment reached through quadratic voting. The grants can therefore be funded - this happens after ratification by a team of Gitcoin stewards and release of matching pool funds held in escrow in a multisig contract on Ethereum.</p><h2 id="h-evolution" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Evolution</h2><p>Substantial work is put into ensuring that each grant round improves upon the last. There are many facets to this. At the core is a process of reflection on the technical and human aspects of the round. For example, fraud detection analytics are used to quantify the efficacy of Sybil defenders and GIA in removing dishonest participants from the process. The analytics answer the questions: how many Sybils were removed and how many snuck through? How many grants turned out to be fraudulent? How much did each review cost?</p><p>Evaluating the number of Sybils identified using the machine learning models compared to human evaluators provides an opportunity to tune the models for better performance. This process should lead to more accurate model outcomes in successive rounds. Combined with human evaluations, this will lead to an ever-improving attrition rate for Sybils. 
For example, comparison between the automated and human evaluations for GR13 indicates that the model is more lenient than human evaluators, raising only about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@bsci-gitcoin/SkdBF87Xc">84% of the warning flags that humans do</a> and therefore letting more Sybils through undetected. In GR13, the Matrix squad developed a classification of diagnostic Sybil behaviours and provided analysis of past Sybils that challenged current assumptions about how Sybils can be detected. This information can now be used to refine the Sybil detection process for the upcoming season. Over time this can also reduce the cost of defending the grants against Sybils by reducing the necessary person-hours invested in it. The cost per grant review should also decrease in successive rounds as the incentive and trust models are better optimized by implementing the findings from continued research and development. Between GR12 and GR13 the cost per grant was cut to roughly a third as a result of streamlining across FDD.</p><p>As well as quantitative analysis of the grants and votes, it is critical for FDD to be introspective. For example, how trustworthy were the reviewers? Did the various teams meet their objectives for the season? Are the DAO contributors happy and productive? Where were the bottlenecks? These qualitative investigations are managed by the Mandate Delivery Squad and FDD governance. This spans informal conversations and group meetings to surveys and formal feedback that collectively provide an FDD healthcheck. Issues arising are discussed transparently, leading to refinements to the FDD functioning in advance of the next round. In upcoming rounds, a new squad, &quot;xOS&quot;, will be introduced to improve the user experience (UX) for contributors, in particular making the decision making processes within FDD easier to understand. 
Also, a dedicated team of &quot;Storytellers&quot; will create accessible materials that keep the community (both internal and external to FDD) well informed about FDD&apos;s operations, further adding to transparency and accountability.</p><p>In the end, the entire grant evaluation process is revised, refined and optimized in a continuous loop. As information flows through the FDD function from grant application to grant outcome, insights about the function itself are extracted at every stage, like a self-monitoring, diagnostic system. The source council, team leads and contributors use these insights to learn about the inefficiencies and pinch points and minimize them for future rounds. The tools and expertise developed by FDD are also available to other external groups who can benefit from data analysis and modelling services (as managed by FDD Participatory Data Services), distributing the benefits of FDD across Gitcoin DAO.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>FDD is a trust-generating function that takes in grant applications and returns decisions that maximize the public goods that can be supported by a given pool of funds. This means defending against fraudulent grants and Sybil attackers, and requires constant introspection about the health of the system. This is managed by three broad groups of subsystems, known as GIA, Sybil Defenders and Evolution. Together, these systems combine to ensure the grants funded by Gitcoin can be trusted by the community to be fair, transparent and representative of the community&apos;s priorities. 
FDD&apos;s introspection layer has identified several major upgrades to the system that will be implemented in upcoming rounds, including the integration of Ethelo for review management, new reviewer incentivization models, enhanced Sybil defense schemes, improved contributor UX and the creation of a storytelling team, all of which will further optimize the FDD function and level up the service it offers to the Gitcoin community.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Ethereum PoS Attack and Defense]]></title>
            <link>https://paragraph.com/@jmcook/ethereum-pos-attack-and-defense</link>
            <guid>0CxOv31rEHEZmGWr2IHC</guid>
            <pubDate>Fri, 15 Apr 2022 09:45:44 GMT</pubDate>
            <description><![CDATA[Ethereum is a notoriously adversarial environment. Ethereum has even been compared to a “dark forest” - acknowledging the terrifying game-theoretic concept from the Three-Body Problem that being visible to other entities in the universe is an unavoidable precursor to being destroyed by them. This reputation mostly comes from weaknesses in the application layer (insecure smart contracts) or the social layer (users being manipulated to give up their private keys or unwittingly sign transactions...]]></description>
            <content:encoded><![CDATA[<p><em>Ethereum is a notoriously adversarial environment.</em> Ethereum has even been compared to a<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.paradigm.xyz/2020/08/ethereum-is-a-dark-forest"> “dark forest”</a> - acknowledging the terrifying game-theoretic concept from the<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/The_Dark_Forest"> Three-Body Problem</a> that being visible to other entities in the universe is an unavoidable precursor to being destroyed by them. This reputation mostly comes from weaknesses in the application layer (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://rekt.eth.link/">insecure smart contracts</a>) or the social layer (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/pt/security/#common-scams">users being manipulated</a> to give up their private keys or unwittingly sign transactions) and from the existence of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/mev/">bots extracting value</a> from the transaction mempool. However, sophisticated hackers acting either as thieves or saboteurs are also constantly seeking out opportunities to attack Ethereum’s client software. The<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/nodes-and-clients/"> client software</a> is what turns a computer into an Ethereum node - it is code that defines all the rules for connecting to other nodes, swapping information and agreeing on the state of the Ethereum blockchain. 
Attacks on the protocol layer are attacks on Ethereum itself.</p><p>This article gives an overview of known attack vectors on Ethereum’s consensus layer and outlines how those attacks can be defended against. Some basic knowledge of the<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/upgrades/beacon-chain/"> Beacon Chain</a> is probably required to get the most value from this article. Good introductory material is available<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/upgrades/beacon-chain/"> here</a>,<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethos.dev/beacon-chain/"> here</a> and<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://our.status.im/two-point-oh-the-beacon-chain/"> here</a>. Also, it will be helpful to have a basic understanding of Ethereum’s<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eth2book.info/altair/part2/incentives"> incentive layer</a> and fork-choice algorithm,<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2003.03052"> LMD-GHOST</a>. These are big topics, but I’ve included a very high-level primer in the preamble below.</p><h2 id="h-preamble" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Preamble</h2><h3 id="h-the-incentive-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">The Incentive Layer</h3><p>Ethereum is a proof-of-stake blockchain that is secured using Ethereum’s native cryptocurrency, ether. Node operators that wish to participate in validating blocks and identifying the head of the chain deposit ether into a smart contract on Ethereum. 
They are then paid in ether to run validator software that checks the validity of new blocks received over the peer-to-peer network and applies the fork-choice algorithm to identify the head of the chain. The node operator is now a “validator”. There are two primary roles for a validator: 1) checking new blocks and “attesting” to them if they are valid, 2) proposing new blocks when selected at random from the total validator pool. If the validator fails to do either of these tasks when asked, they miss out on an ether payout. There are also some actions that are very difficult to do accidentally and signify some malicious intent, such as proposing multiple blocks for the same slot or attesting to multiple blocks for the same slot. These are “slashable” behaviors that result in the validator having some amount of ether (up to 0.5 ETH) burned before the validator is removed from the network, which takes 36 days. The slashed validator’s ether slowly drains away across the exit period, but on Day 18 they receive a “correlation penalty” which is larger when more validators are slashed around the same time. Ethereum’s incentive structure therefore pays for honesty and punishes bad actors.</p><h3 id="h-fork-choice" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Fork choice</h3><p>The fork choice algorithm is run by every validator and its role is to identify the head of the blockchain. Under ideal conditions with entirely honest validators and zero network latency, the fork choice algorithm is not really necessary as there will only ever be one block at the head of the chain. However, in reality some clients receive blocks later than others, creating multiple views of the head of the chain, and there might be some percentage of misbehaving validators that could be proposing or voting for multiple blocks in the same slot. 
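</p><p>The slashing penalties described above can be sketched numerically. The following is a simplified illustration only - the live protocol works in gwei increments and its constants (initial penalty, correlation multiplier) have changed across upgrades - but it captures why a lone slashed validator loses little while mass slashing is catastrophic:</p>

```python
# Illustrative sketch of the slashing correlation penalty (simplified).
# The multiplier below is an assumption based on the Altair-era spec;
# treat all numbers here as approximations, not spec-exact values.

CORRELATION_MULTIPLIER = 2  # proportional slashing multiplier (assumed)

def correlation_penalty(balance_eth, total_slashed_eth, total_staked_eth):
    """Extra penalty applied at the midpoint (~day 18) of the exit period.

    Scales with the fraction of all stake slashed in the surrounding
    window, capped at the validator's entire balance."""
    fraction = min(CORRELATION_MULTIPLIER * total_slashed_eth / total_staked_eth, 1.0)
    return balance_eth * fraction

# A lone slashed validator among 10M staked ETH barely loses anything extra:
print(correlation_penalty(32, 32, 10_000_000))
# But if half the network is slashed in the same window, the whole 32 ETH burns:
print(correlation_penalty(32, 5_000_000, 10_000_000))
```

<p>The key property is the cap: once enough stake is slashed in the same window, the correlation penalty consumes a validator’s entire balance, which is what makes large coordinated attacks so expensive.</p><p>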
This means there has to be some algorithm for deterministically picking out the true head from multiple options.</p><p>To rewind slightly, the chain is also ossified at regular intervals so that its blocks can’t be replaced <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2016/05/09/on-settlement-finality/">without &gt;⅓ of the total stake being slashed</a>. This is known as “finality”. The process works by considering the first slot in each epoch to be a “checkpoint”. If a checkpoint gathers attestations (votes) from validators holding at least 2/3 of the total staked ether in the deposit contract, then it is referred to as “justified”. Once that checkpoint has another checkpoint justified on top of it, it becomes “finalized”. The fork choice algorithm then only considers blocks in the non-justified portion of the chain. The algorithm that justifies and finalizes the chain is called “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/1710.09437">Casper FFG”</a>. The fork choice algorithm itself is called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2003.03052">LMD-GHOST</a>, standing for “latest message driven greedy heaviest observed subtree”, which is a jargon-heavy way of saying the correct chain is the one that has accumulated the most attestations (GHOST) and that if multiple messages are received from the same validator only the last one counts (LMD). Each validator assesses each block using this rule and adds the heaviest one to its canonical chain.</p><p>Once per epoch, the validator is required to sign an attestation. This attestation contains two critical pieces of information: an LMD vote and an FFG vote. The LMD vote is the root of the block the validator considers to be the head of the chain. 
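</p><p>The LMD-GHOST rule described above can be captured in a short sketch. This is a toy model, not client code - the block tree, vote map and equal vote weights are hypothetical simplifications (real clients weight each vote by the validator’s effective balance):</p>

```python
# Toy LMD-GHOST: start at the root and greedily descend into the child
# whose subtree has accumulated the most latest-message votes.
# The block tree and vote structures here are hypothetical.

def lmd_ghost_head(children, latest_votes, root):
    """children: block -> list of child blocks.
    latest_votes: validator -> the block it last attested to (the "LMD" part).
    Returns the head of the chain (GHOST: greedy heaviest-observed subtree)."""
    def subtree_weight(block):
        # Count every latest vote for this block or any of its descendants.
        direct = sum(1 for b in latest_votes.values() if b == block)
        return direct + sum(subtree_weight(c) for c in children.get(block, []))

    head = root
    while children.get(head):
        # Greedy step: follow the heaviest child subtree.
        head = max(children[head], key=subtree_weight)
    return head

# A fork at block A: three validators last voted on B's branch, one on C's,
# so the fork-choice walk ends at B.
children = {"genesis": ["A"], "A": ["B", "C"]}
latest_votes = {"v1": "B", "v2": "B", "v3": "C", "v4": "B"}
print(lmd_ghost_head(children, latest_votes, "genesis"))  # B
```

<p>If validator v1 later voted again, only its newest message would replace the old entry in <code>latest_votes</code> - that is the “latest message driven” part of the rule.</p><p>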
The FFG vote contains the block hash and epoch for the target and source checkpoints, where the source is the most recent justified checkpoint the chain already knows about, and the target is the next checkpoint to be justified.</p><p>Ethereum’s consensus algorithm is therefore a combination of LMD-GHOST and Casper FFG which are sometimes referred to singularly as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2003.03052.pdf">Gasper</a>. With this high level background, we can move on to examine some of the potential ways this system could be attacked.</p><h2 id="h-layer-0-attacks" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Layer 0 Attacks</h2><p>First of all, individuals that are not actively participating in Ethereum (by running client software) can choose to attack the network by targeting the social layer (Layer 0). These attacks pose a risk to Ethereum despite never actually directly influencing the execution of any of Ethereum’s software. Layer 0 is the foundation upon which Ethereum is built, and as such it represents a potential surface for attacks with consequences that ripple through the rest of the stack. Some examples come to mind:</p><ol><li><p>A major misinformation campaign launched across multiple platforms and sustained for months/years could erode the trust the community has in Ethereum’s roadmap, team of developers etc. This could then decrease the number of individuals willing to participate in securing the network, degrading both decentralization and crypto-economic security.</p></li><li><p>Targeted attacks and/or intimidation directed at the developer community. 
This could lead to voluntary exit of developers and slow down Ethereum’s progress while sapping morale more widely.</p></li><li><p>Over-zealous regulation could also be considered to be an attack on Layer 0, since it could rapidly disincentivize participation and adoption.</p></li><li><p>Infiltration of knowledgeable but malicious actors into the developer community whose aim is to slow down progress by bike-shedding discussions, delaying key decisions, creating spam or diversionary proposals, etc.</p></li><li><p>Deliberate stoking of discontent among the Ethereum community with the aim of creating sufficient unrest to cause a permanent schism.</p></li><li><p>Bribes made to key players in the Ethereum ecosystem to influence decision making.</p></li></ol><p>What makes attacks on the social layer especially dangerous is that in many cases very little capital or technical know-how is required to launch an attack. All that is really required is time and malicious intent - hardly scarce resources. It is also interesting to think about how a Layer 0 attack could be a multiplier on a crypto-economic attack. For example, if censorship or finality reversion were achieved by a malicious majority stakeholder, undermining the social layer might make it more difficult to coordinate a community response out-of-band.</p><p>Defending against Layer 0 attacks is probably not straightforward, but some basic principles can be established. One is maintaining an overall high signal to noise ratio for public information about Ethereum, created and propagated by honest members of the community through blogs, discord servers, annotated specs, books, podcasts and Youtube.<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org"> Ethereum.org</a> is a great example of this, especially because they are rapidly translating their extensive documentation and explainer articles into many languages. 
Flooding a space with high-quality information and memes is an effective defense against misinformation - it is the information gaps that are vulnerable. The Ethereum community is good at this, but continued commitment to creating and disseminating quality information is required for long-term Layer 0 security.</p><p>Another important fortification against social layer attacks is a clear mission statement and governance protocol. Ethereum has positioned itself as the decentralization and security champion among smart-contract layer 1s, while also highly valuing scalability and sustainability. Whatever disagreements arise in the Ethereum community, these core principles are minimally compromised. Appraising a narrative against these core principles, and examining them through successive rounds of review in the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/governance/">EIP (Ethereum Improvement Proposal)</a> process, might help the community to distinguish good from bad actors and limit the scope for malicious actors to influence the future direction of Ethereum.</p><p>Finally, it is critical that the Ethereum community remains open and welcoming to all participants. A community with gatekeepers, elitism and exclusivity is one especially vulnerable to social attack because it is easy to build “us and them” narratives. On the other hand, an open and inclusive community is one where misinformation is more effectively erased through open-minded discussion. Tribalism and toxic maximalism hurt the community and erode Layer 0 security. Ethereum generally has a very open community that welcomes new participants, but as the community scales this may become increasingly difficult to sustain. 
Ethereum community members with a vested interest in the security of the network should view their conduct online and in meatspace as a direct contributor to the security of Ethereum’s Layer 0 because, as we will discuss later in this article, a strong social layer is the last line of defense against protocol attacks.</p><h2 id="h-the-attackers-prize" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The attacker’s prize</h2><p>Layer 0 attacks might aim to undermine public trust in Ethereum, devalue ether, reduce adoption and make Ethereum vulnerable to being usurped by another competing chain, or to weaken the Ethereum community to make out-of-band coordination more difficult. However, it is not immediately obvious what is to be gained from attacking the Ethereum network itself.</p><p>A common misconception is that a successful attack allows an attacker to generate new ether, or drain ether from arbitrary accounts. Neither of these is plausible because all transactions that get added to the blockchain are executed by all the execution clients on the network. They must satisfy basic conditions of validity (e.g. transactions are signed by the sender’s private key, the sender has sufficient balance, etc.) or else they simply revert. There are several outcomes that an attacker might realistically aim for: reorgs, double finality or finality delay.</p><p>A “reorg” is a reshuffling of blocks at the head of the chain. In an attack this would aim to ensure certain blocks are either included or excluded even though they would not be in an honest network. This might allow an attacker to “double spend” by, for example, sending their ether to an exchange and cashing it out into fiat money, then reorganizing the Ethereum chain to remove that transaction so they end up with both the ether and its fiat equivalent. 
Alternatively, a reorg might allow a sophisticated attacker to extract value from other people’s transactions by front-running and back-running (MEV), or reorgs might consistently prevent someone’s or some group’s transactions from being included in the canonical chain, effectively censoring them from the Ethereum network.</p><p>The most extreme form of reorg is “finality reversion” which removes or replaces blocks that have previously been finalized. This is only possible if at least ⅓ of the total staked ether is destroyed - this guarantee is known as “economic finality” - more on this later.</p><p>Double finality is the unlikely but severe condition where two forks are able to finalize simultaneously, creating a permanent schism in the chain. This is theoretically possible for an attacker willing to risk 34% of the total staked ether. The community would be forced to coordinate off-chain and come to an agreement about which chain to follow. These kinds of social coordination defenses are explored in detail later.</p><p>A finality delay attack prevents the network from reaching the necessary conditions for Casper-FFG to finalize sections of the chain. This would be very disruptive to Ethereum’s application layer since many of the apps that run on top of Ethereum rely upon rapid finality to operate. Without having high confidence in the finality of the chain it is hard to trust financial applications built on top of it. The aim of a finality delay attack is likely simply to disrupt Ethereum - to “watch the world burn” - rather than to directly turn a profit, unless they have some strategic short positions.</p><h2 id="h-laziness-and-equivocation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Laziness and Equivocation</h2><p>Anyone can run Ethereum’s client software, even without running a validator. 
People do this because it provides local copies of the blockchain that can be used to verify data very quickly and enables transactions to be submitted to Ethereum privately without going through a centralized third-party such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://infura.io/">Infura</a> or <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.quicknode.com/">Quicknode</a>. However, a node operator that does not also run a validator cannot participate in block production or validation. This means they really don’t influence the network security at all. The potential for a non-validating node operator to attack Ethereum is negligible unless they also mount an unrelated layer 0 attack.</p><p>To add a validator to a consensus client, a user is required to stake 32 ether into the deposit contract. With an active validator, a user begins to actively participate in Ethereum’s network security by proposing and attesting to new blocks. With these added responsibilities come rewards in the form of ether payouts but also new opportunities to act vindictively. The validator now has a voice they can use to influence the future contents of the blockchain - they can do so honestly and grow their stash of ether or they can try to manipulate the process to their own advantage, risking their stake. One way to mount an attack is to accumulate a greater proportion of the total stake and then use it to outvote honest validators. The greater the proportion of the stake controlled by the attacker the greater their voting power, especially at certain economic milestones that we will explore later. 
However, most attackers will not be able to accumulate sufficient ether to attack in this way, so instead they have to use subtle techniques to manipulate the honest majority into acting a certain way.</p><p>Fundamentally, all small-stake attacks are subtle variations on two types of validator misbehavior: under-activity (failing to attest/propose or doing so late) or over-activity (proposing/attesting too many times in a slot). In their most vanilla forms, these actions are easily handled by the fork-choice algorithm and incentive layer, but there are clever ways to game those same algorithms to an attacker’s advantage. Several such techniques have been discovered, most of them relying on carefully coordinating the timing and propagation of messages to control how different subsets of the total validator set view the state of the blockchain, and therefore how they behave. The next sections will describe some of the ways low-stake attackers could attack the network and how these attacks can be resisted. While these attacks are discussed in the context of small stakes, more colluding validators means more chances for the attacker to propose blocks, a wider distribution of dishonest nodes over the network topology and greater voting power to influence the fork choice algorithm, all of which give better chances of coordinating lots of validators to act in a particular way.</p><h2 id="h-attacks-by-small-stakeholders" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Attacks by small stakeholders</h2><h3 id="h-short-range-re-orgs" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Short range re-orgs</h3><p>Several papers have explained attacks that achieve reorgs or finality delay with only a small proportion of the total staked ether. These attacks generally rely upon the attacker withholding some information from other validators and then releasing it in some nuanced way and/or at some opportune moment. 
They usually aim to displace some honest block(s) from the canonical chain. These honest blocks have not yet been created at the time the attack starts. This is known as an <em>ex ante</em> reorg, as opposed to an <em>ex post</em> reorg in which an attacker removes an already-validated block from the canonical chain retrospectively. <em>Ex post</em> reorgs are effectively impossible on PoS Ethereum without controlling 2/3 of the staked ether (about $18 billion at current prices). With 66% of the stake the attacker can cause a tie-break between the honest and dishonest fork which may break in their favor (this is decided by the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#get_head">lexicographical order of the competing block roots</a>). With anything less than 66% of the total stake, the chance of an attacker completing an <em>ex post</em> reorg is very low - even with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/casparschwa/status/1454511865938186247?s=20&amp;t=5a5Mjzp3iDDSKq4l21-9bw">65% stake they only have &lt;0.05% chance</a> of success.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2323dc23c4240b09bc2af85f8975527788be3a231e82f0518b092d47f359f9e5.png" alt="Ex-post reorg by a ⅔ attacker. They ignore the honest block B proposed in slot N+1 and instead vote for N as the head of the chain. Then they vote for block C in slot N+2. Meanwhile, because the fork choice rule sees attestations from the previous - not current - slot, they vote for Block B in slot N+2 while the dishonest validators vote for Block C. When slot N+3 arrives both forks have 66% weight and the tie break is decided by the lexicographic order of the competing block hashes. 
If this breaks in favour of the attacker, they have successfully removed Block B from the chain." blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Ex-post reorg by a ⅔ attacker. They ignore the honest block B proposed in slot N+1 and instead vote for N as the head of the chain. Then they vote for block C in slot N+2. Meanwhile, because the fork choice rule sees attestations from the previous - not current - slot, they vote for Block B in slot N+2 while the dishonest validators vote for Block C. When slot N+3 arrives both forks have 66% weight and the tie break is decided by the lexicographic order of the competing block hashes. If this breaks in favour of the attacker, they have successfully removed Block B from the chain.</figcaption></figure><p>On the other hand, the same mechanism that protects extremely well against ex-post reorgs can be gamed by a sophisticated attacker - under very specific and unlikely network conditions - to create ex ante reorgs. For example, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2102.02247.pdf">this paper</a> shows how an attacking validator can create and attest to a block (B) for a particular slot <code>n + 1</code> but refrain from propagating it to other nodes on the network. Instead, they hold on to that attested block until the next slot <code>n + 2</code>. An honest validator proposes a block (C) for slot <code>n + 2</code>. Almost simultaneously, the attacker can release their withheld block (B) and their withheld attestations for it, and also attest to B being the head of the chain with their votes for slot <code>n+2</code>, effectively denying the existence of honest block C. When honest block D is released, the fork choice algorithm sees D building on top of B being heavier than D building on C. 
The attacker has therefore managed to remove the honest block C in slot <code>n + 2</code> from the canonical chain using a 1-block ex ante reorg. An <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=6vzXwwk12ZE">attacker with 34%</a> of the stake has a very good chance of succeeding in this attack because their votes give 68% weight to the attacker’s preferred fork, as opposed to 66% for the honest fork, as explained <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://notes.ethereum.org/plgVdz-ORe-fGjK06BZ_3A#Fork-choice-by-block-slot-pair">here</a>. This means they do not need to rely on manipulating honest validators to vote with them. In theory, though, this attack could be attempted with smaller stakes. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://econcs.pku.edu.cn/wine2020/wine2020/Workshop/GTiB20_paper_8.pdf">Neuder et al. 
(2020)</a> described this attack working with a 30% stake, but it was later shown to be viable with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2009.04987.pdf">2% of the total stake</a> and then again for a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/abs/2110.10086#">single validator</a> using balancing techniques we will examine in the next section.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/04616449011baedcae601eb98fdd403ae45ce73569321288046138b7da759a95.png" alt="A conceptual diagram of the one-block reorg attack described above (adapted from https://notes.ethereum.org/plgVdz-ORe-fGjK06BZ_3A#Fork-choice-by-block-slot-pair)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">A conceptual diagram of the one-block reorg attack described above (adapted from https://notes.ethereum.org/plgVdz-ORe-fGjK06BZ_3A#Fork-choice-by-block-slot-pair)</figcaption></figure><p>A successful reorg attacker cannot change history, but they can dishonestly alter the future. They did not require a majority of staked ether to do this, although their chance of success increases with their stake. Their reorg could feasibly allow them to double-spend or extract MEV by front-running large transactions. 
This attack could feasibly be extended out to more than one block, but the likelihood of success decreases as the reorg length increases.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/casparschwa/status/1454511836267692039">https://twitter.com/casparschwa/status/1454511836267692039</a></p><h3 id="h-bouncing-and-balancing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Bouncing and Balancing</h3><p>A more sophisticated attack can split the honest validator set into discrete groups that have different views of the head of the chain. This is known as a balancing attack. In this case, the attacker waits for their chance to propose a block, and when it arrives they equivocate and propose two blocks in the same slot. They send one block to one half of the honest validator set and the other block to the other half. The equivocation would be detected, and the block proposer would be slashed and ejected from the network, but the two blocks would still exist and would have about half the validator set attesting to each fork. For the cost of a single slashed validator, the attacker has managed to split the chain in two. Meanwhile, the remaining malicious validators hold back their attestations. Then, by selectively releasing the attestations favoring one or other fork to just enough validators just as the fork-choice algorithm executes, they are able to tip the network into seeing either fork having the most accumulated attestations. This can continue indefinitely, with the attacking validators maintaining an even split of validators across the two forks. Since neither fork can attract a 2/3 supermajority, the chain would not finalize. The greater the portion of the total stake the attacking validators control, the greater the probability that the attack is possible in any given epoch, because it is more likely that they will have a validator selected to propose a block in each slot. 
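</p><p>The frequency of these attack opportunities is easy to estimate. Assuming proposer selection is approximately uniform over stake, the chance of controlling the proposer for any given slot equals the attacker’s stake fraction. The numbers below are back-of-envelope illustrations, not spec calculations:</p>

```python
# How often does a staker with fraction p of the total stake get to propose?
# Proposer selection is modeled here as an independent per-slot Bernoulli
# trial with probability p - an approximation, but good enough for estimates.

SLOTS_PER_EPOCH = 32

def expected_slots_between_proposals(stake_fraction):
    # Geometric waiting time: on average 1/p slots between proposals.
    return 1 / stake_fraction

def chance_of_proposing_in_epoch(stake_fraction):
    # P(at least one proposal opportunity in a 32-slot epoch).
    return 1 - (1 - stake_fraction) ** SLOTS_PER_EPOCH

print(expected_slots_between_proposals(0.01))        # 100 slots (~3 epochs)
print(round(chance_of_proposing_in_epoch(0.01), 3))  # ~0.275 per epoch
```

<p>At 12 seconds per slot, a 1% staker therefore expects a proposal opportunity roughly every 20 minutes, which is why even small-stake timing attacks have to be taken seriously.</p><p>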
Even with just 1% of the total stake, the opportunity to mount a balancing attack would arise on average once every 100 slots (roughly every three epochs), which is not very long to wait.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vimeo.com/637529564">https://vimeo.com/637529564</a></p><p>A similar attack, also possible with a small percentage of the total stake, is a bouncing attack. In this case, votes are again withheld by the attacking validators. This time, instead of releasing the votes to keep an even split between two forks, they use their votes at opportune moments to justify checkpoints that alternate between fork A and fork B. This flip-flopping of justification between two forks prevents there from being pairs of justified source and target checkpoints that can be finalized on either chain, halting finality.</p><h3 id="h-defending-bouncing-and-balancing-attacks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Defending bouncing and balancing attacks</h3><p>Both bouncing and balancing attacks rely on the attacking validators delaying their attestations until some opportune moment when they can have outsized impact on the network. Therefore, the attacks are only viable under unlikely network-latency conditions, and require the attacker to have very fine control over message timing by tightly coordinating colluding validators. Nevertheless, it is still necessary to close this attack vector. To guard against late-arriving messages influencing consensus, the weight of messages received late can be diminished compared to those received promptly. 
This is known as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ethereum/consensus-specs/pull/2730">proposer-weight boosting.</a></p><p>For bouncing attacks, the fix was to update the fork-choice algorithm so that the latest justified checkpoint can only switch to that of an alternative chain during the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114">first 1/3 of the slots in each epoch</a>. This condition prevents the attacker from saving up votes to deploy later - the fork choice algorithm simply stays loyal to the checkpoint it chose in the first 1/3 of the epoch during which time most honest validators would have voted. The other defense against these delayed-voting attacks is to assign a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ethereum/consensus-specs/pull/2353">greater weight to votes that arrive promptly</a> compared to votes that arrive late in each slot.</p><p>Combined, these measures create a scenario in which an honest block proposer emits their block very rapidly after the start of the slot, then there is a period of ~1/3 of a slot (4 seconds) where that new block might cause the fork-choice algorithm to switch to another chain. After that same deadline, attestations that arrive from slow validators are down-weighted compared to those that arrived earlier. This strongly favors prompt proposers and validators in determining the head of the chain and substantially reduces the likelihood of a successful balancing or bouncing attack. In essence, these defenses protect against attacks based on large network asynchronicity, even in the latter case described above where fine control over message release was not required. 
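</p><p>The effect of proposer boosting on fork choice can be sketched as follows. The boost of 40% of a slot’s committee weight matches the consensus specs at the time of writing, but treat the exact numbers as assumptions; the weights and structures here are hypothetical simplifications:</p>

```python
# Sketch of proposer-weight boosting (simplified).
# A block that arrives within the first ~4 seconds of its slot temporarily
# counts extra fork-choice weight, so a withheld, late-released block needs
# a large surplus of hoarded attestations to beat it.

PROPOSER_SCORE_BOOST = 40  # percent of one slot's committee weight (assumed)

def fork_weight(attestation_weight, committee_weight, timely_proposal):
    """Attestation weight plus the temporary boost for a timely block."""
    boost = committee_weight * PROPOSER_SCORE_BOOST // 100 if timely_proposal else 0
    return attestation_weight + boost

committee_weight = 1000
# Honest block, released promptly, with 480 units of attestation weight:
honest = fork_weight(480, committee_weight, timely_proposal=True)
# Attacker's withheld block, released late with 520 hoarded units:
withheld = fork_weight(520, committee_weight, timely_proposal=False)
print(honest, withheld, honest > withheld)  # 880 520 True
```

<p>In other words, the attacker must hoard more than the boost’s worth of extra attestation weight to win the fork choice, which is what turns these “cheap” reorgs into expensive ones.</p><p>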
To a large extent, then, the risks of these types of attack have been mitigated by modifications to the fork-choice algorithm that favor prompt activity and penalize delays.</p><p>It is worth noting that proposer boosting alone only defends against “cheap reorgs”, i.e. those attempted by an attacker with a small stake. In fact, proposer-boosting itself can be gamed by larger stakeholders in yet another <em>ex ante</em> reorg attack. The authors of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/change-fork-choice-rule-to-mitigate-balancing-and-reorging-attacks/11127">this post</a> describe how an attacker with 7% of the stake can deploy their votes strategically to trick honest validators into building on their fork, reorging out an honest block. The honest validators that vote for the adversary’s fork do so promptly, such that the attacker benefits from the proposer boost. Again, this attack was devised assuming ideal latency conditions that are very unlikely to be met in the wild. The greater the attacker’s stake, the greater the odds of a successful attack. However, the odds are still very long for the attacker, and the greater stake also means more capital at risk and a stronger economic disincentive.</p><h3 id="h-advanced-balancing-attacks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Advanced balancing attacks</h3><p>The aforementioned bouncing and balancing attacks relied upon malicious validators having very fine control over when their messages were received by other validators on the network; these attacks have been mitigated effectively by proposer boosting. However, an <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2110.10086.pdf">additional attack has also been described</a> that does not rely on such fine-grained control over network latency. 
In this case, the attacker requires a proposing validator in two consecutive slots (the odds of this happening in any two slots increase the more validators the attacker controls). One of the adversarial block proposers proposes a block in slot <code>n</code>, then the second adversarial block proposer proposes a conflicting block in slot <code>n+1</code>, creating a fork. Since neither block proposer equivocated, no slashing occurs. One nuance of the fork choice algorithm is that when forks have equal numbers of attestations, the tie is broken in favor of the head with the smallest hash. In this example, let’s say the tie breaks in favor of Fork A. This is knowable by the attacker. The attacker can also estimate the time taken for half the validators on the network to submit their attestations. The withheld votes from slot <code>n</code> can be released at roughly the point in time when half the validators have voted. These are attestations from slot <code>n</code> in favor of Fork B. Half the validator set therefore vote for Fork A because they do not have knowledge of the additional attestations on Fork B, while the other half vote for a heavier Fork B. The adversarial votes withheld in <code>n+1</code> can be used to make up any shortfall on Fork B due to inaccuracy in the timing of the release of the withheld attestations.</p><p>This balancing attack was described for an idealized version of the fork-choice algorithm that has more predictable attestation timing than the fork-choice algorithm actually implemented in Ethereum’s consensus clients, and it would be much harder to execute on the real chain. 
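</p><p>The hash tie-break rule mentioned above is simple enough to sketch. This is illustrative logic only, not client code (real clients compare 32-byte block roots; short strings stand in for them here):</p>

```python
# Illustrative tie-break: when two candidate heads carry equal attestation
# weight, the fork choice resolves the tie deterministically, here in favor
# of the head with the smallest hash. Strings stand in for 32-byte roots.

def choose_head(candidates: dict[str, int]) -> str:
    """candidates maps block_root -> total attestation weight."""
    # Heaviest subtree wins; an exact tie falls back to the smallest root,
    # which makes the winner of a perfectly balanced fork predictable.
    return min(candidates, key=lambda root: (-candidates[root], root))

print(choose_head({"0xbeef": 10, "0xaaaa": 10}))  # 0xaaaa (tie -> smallest hash)
print(choose_head({"0xbeef": 11, "0xaaaa": 10}))  # 0xbeef (weight wins)
```

<p>Because the loser of a tie is predictable in advance, the attacker knows exactly which fork their withheld votes will need to prop up.</p><p>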
Distributing an attacker’s nodes across the network topology could help the attacker overcome this to some degree because their messages would propagate across the entire network faster than if they originate from one topological position.</p><p>A <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/balancing-attack-lmd-edition/11853">balancing attack specifically targeting the LMD</a> rule was also proposed, which was suggested to be viable in spite of proposer boosting. An attacker sets up two competing chains by equivocating their block proposal and propagating each block to about half of the network, setting up an approximate balance between the forks. Then, the colluding validators equivocate their votes, timing it so that half the network receives their votes for Fork A first and the other half receives their votes for Fork B first. Since the LMD rule discards the second attestation and keeps only the first for each validator, half the network sees votes for A and none for B, while the other half sees votes for B and none for A. The authors describe the LMD rule giving the adversary “remarkable power” to mount a balancing attack.</p><p>This LMD attack vector was closed by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ethereum/consensus-specs/pull/2845">updating the fork choice algorithm</a> so that it discards equivocating validators from the fork choice consideration altogether. Equivocating validators also have their future influence discounted by the fork choice algorithm. This prevents the balancing attack outlined above while also maintaining resilience against avalanche attacks.</p><h3 id="h-upcoming-developments" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Upcoming developments</h3><p>There are open proposals for protocol changes that add reorg resilience. 
One is <code>(block, slot)</code> attestations that require validators to explicitly link their vote to a specific slot - this effectively forces an attacker to use balancing techniques rather than simply withholding and releasing messages. There is also a proposed replacement for proposer-boost called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/view-merge-as-a-replacement-for-proposer-boost/13739">view-merge</a>, which freezes each attester’s view of the fork-choice just before the start of a slot (i.e. the time a block proposer can release a block). The proposer then builds a block that includes all the individual attester views from the entire preceding slot. This ensures that all (attesting and proposing) validators on the canonical chain share the same view at attestation time, preventing ex ante reorgs and balancing attacks.</p><p>The strongest defense against reorgs will be single-slot finality - a situation where the chain can finalize without having to go through the process of justification and finalization across ~12 minutes; instead, it can finalize nearly instantaneously. In this case, all reorgs would be finality-reverting and therefore an attacker would require 66% of the total stake. 
Read more <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://notes.ethereum.org/@vbuterin/single_slot_finality#What-are-the-key-questions-we-need-to-solve-to-implement-single-slot-finality">here</a>.</p><h3 id="h-avalanche-attacks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Avalanche attacks</h3><p>Another class of attack, called <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/avalanche-attack-on-proof-of-stake-ghost/11854/3">avalanche attacks</a>, was described in a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2203.01315.pdf">March 2022 paper</a>. The authors suggest that proposer boosting - the primary defense against balancing and bouncing attacks - does not protect against some variants of avalanche attack. However, the authors only demonstrated the attack on a highly idealized version of Ethereum’s fork-choice algorithm (they used GHOST without LMD).</p><p>To mount an avalanche attack, the attacker needs to control several consecutive block proposers. In each of the block proposal slots, the attacker withholds their block, collecting them up until the honest chain reaches an equal subtree weight with the withheld blocks. Then, the withheld blocks are released so that they equivocate maximally. This means that with, for example, 6 withheld blocks, the first honest block <code>n</code> competes with adversarial block <code>n</code>, creating a fork, then the 5 remaining adversarial blocks all compete with the honest block at <code>n+1</code>. The fork building off adversarial blocks <code>n</code> and <code>n+1</code> now attracts honest attestations, because the blocks were released at the moment the weight of the truly honest chain equaled the weight of the adversarial chain. 
This can now be repeated with the withheld blocks that haven’t yet been built on top of, allowing the attacker to prevent the honest validators from following the honest head of the chain until their equivocating blocks are used up. If the attacker has more opportunities to propose blocks while the attack is underway, they can use them to extend the attack, such that the more validators collude on the attack, the longer it can persist and the more honest blocks can be displaced from the canonical chain.</p><p>The avalanche attack is mitigated by the LMD portion of the LMD-GHOST fork choice algorithm. LMD means “last-message-driven” and it refers to a table kept by each validator containing the latest message received from other validators. That field is only updated if the new message is from a later slot than the one already in the table for a particular validator. In practice, this means that in each slot, the first message received is the one that is accepted and any additional messages are equivocations to be ignored. Put another way, the consensus clients don’t count equivocations - they use the first-arriving message from each validator and equivocations are simply discarded, preventing avalanche attacks.</p><h3 id="h-finality-delay" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Finality Delay</h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://econcs.pku.edu.cn/wine2020/wine2020/Workshop/GTiB20_paper_8.pdf">The same paper</a> that first described the low-cost single block reorg attack also described a finality delay (a.k.a “liveness failure”) attack that relies on the attacker being the block proposer for an epoch-boundary block. This is critical because these epoch boundary blocks become the checkpoints that Casper FFG uses to finalize portions of the chain. 
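</p><p>How often an attacker finds themselves in that position is straightforward to estimate: if proposers are sampled in proportion to stake, the chance of controlling the proposer of any particular epoch-boundary block is roughly the attacker’s stake fraction, so the expected wait is geometric. A quick back-of-envelope calculation (illustrative assumptions: one boundary slot per epoch, 32 slots of 12 seconds each):</p>

```python
# Back-of-envelope odds of controlling an epoch-boundary proposer, assuming
# proposers are drawn in proportion to stake. One boundary slot per epoch,
# 32 slots of 12 seconds each. Illustrative only.

SLOTS_PER_EPOCH, SECONDS_PER_SLOT = 32, 12

def expected_wait_epochs(stake_fraction: float) -> float:
    # Geometric waiting time: mean number of trials until the first success
    return 1.0 / stake_fraction

def expected_wait_hours(stake_fraction: float) -> float:
    epoch_seconds = SLOTS_PER_EPOCH * SECONDS_PER_SLOT
    return expected_wait_epochs(stake_fraction) * epoch_seconds / 3600

print(f"{expected_wait_epochs(0.01):.0f} epochs")  # 100 epochs for a 1% stake
print(f"{expected_wait_hours(0.01):.1f} hours")    # 10.7 hours
```

<p>So even a small stakeholder gets this opportunity regularly - consistent with the once-every-100-epochs figure quoted earlier for a 1% stake.</p><p>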
The attacker simply withholds their block until enough honest validators use their FFG votes in favor of the previous epoch-boundary block as the current finalization target. Then they release their withheld block. They attest to their block and the remaining honest validators do too, creating forks with different target checkpoints. If they time it just right, they will prevent finality because there will not be a 2/3 supermajority attesting to either fork. The smaller the stake, the more precise the timing needs to be because the attacker controls fewer attestations directly, and the lower the odds of the attacker controlling the validator proposing a given epoch-boundary block.</p><h3 id="h-note-on-long-range-attacks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Note on long range attacks</h3><p>There is also a class of attack specific to proof-of-stake blockchains that involves a validator that participated in the genesis block maintaining a separate fork of the blockchain alongside the honest one, eventually convincing the honest validator set to switch over to it at some opportune time much later. This type of attack is not possible on Ethereum because of the finality gadget that ensures all validators agree on the state of the honest chain at regular intervals (“checkpoints”). This simple mechanism neutralizes long range attackers because Ethereum clients simply will not reorg finalized blocks. New nodes joining the network do so by finding a trusted recent state hash (a “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/">‘weak subjectivity’</a> checkpoint”) and using it as a pseudo-genesis block to build on top of. This creates a ‘trust gateway’ for a new node entering the network before it can start to verify information for itself. 
However, the trust required to gather a checkpoint from a peer or block explorer or elsewhere <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/">does not add much</a> to the trust placed implicitly in the client developer teams, hence the subjectivity is “weak”. Because checkpoints are, by definition, shared by all nodes on the network, a dishonest checkpoint is symptomatic of a consensus failure and out-of-band social coordination will have to take over to save the honest validators anyway.</p><p>All of this points to the fact that it is very difficult to successfully attack Ethereum with a small stake. The viable attacks that have been described here require an idealized fork-choice algorithm, improbable network conditions, or the attack vectors have already been closed with relatively minor patches to the client software. This, of course, does not rule out the possibility of zero-days existing out in the wild, but it does demonstrate the extremely high bar of technical aptitude, consensus layer knowledge and luck required for a minority-stake attacker to be effective. From an attacker’s perspective, their best bet might be to accumulate as much ether as possible and to return armed with a greater proportion of the total stake.</p><h3 id="h-denial-of-service" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Denial of Service</h3><p>Ethereum’s PoS mechanism picks a single validator from the total validator set to be a block proposer in each slot. This can be computed using a publicly known function and it is possible for an adversary to identify the next block proposer slightly in advance of their block proposal. Then, the attacker can spam the block proposer to prevent them swapping information with their peers. To the rest of the network, it would appear that the block proposer was offline and the slot would simply go empty. 
This could be a form of censorship against specific validators, preventing them from adding information to the blockchain. The cost to the attacker depends upon the bandwidth of the validator - it is much cheaper to launch a denial-of-service attack on a home staker than a professional with industrial-grade hardware and internet connection, making the hobbyist more vulnerable to censorship. There are some workarounds to this problem but they too favor professional validators over home stakers. For example, running multiple nodes and separating the block building from the network communication can give an additional layer of protection because the node identity and the validator identity are decoupled. The node runner might switch the identities around or recouple them at short notice to avoid denial of service attacks. Longer term, implementing single secret leader elections (SSLE) or non-single secret leader elections would provide more robust mitigation against validator censorship because only the block proposer ever knows they have been selected and the selection is not knowable in advance. All validators submit a commitment to a secret into a pool which is repeatedly shuffled. A random commitment is chosen publicly, but only the chosen validator knows that it is the one they submitted - this connection is obfuscated away from any other participant. This is not yet implemented, but is an active area of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/secret-non-single-leader-election/11789">research and development</a>.</p><h2 id="h-validators-controlling-greater-33percent-of-the-total-stake" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Validators controlling &gt;= 33% of the total stake</h2><p>Spreading control of the staked ether across more humans is safer than allowing it to concentrate into fewer hands. 
This is because the more stake one individual controls, the more influence they can have over Ethereum’s consensus. All of the attacks mentioned previously in this article become more likely to succeed when the attacker has more staked ether to vote with, and more validators that might be chosen to propose blocks in each slot. A malicious validator might therefore aim to control as much staked ether as possible.</p><p>33% of the staked ether is a benchmark for an attacker because with anything greater than this amount they have the ability to prevent the chain from finalizing without having to finely control the actions of the other validators. They can simply all disappear together. This is because for the chain to finalize, pairs of checkpoints must be attested by 2/3 of the staked ether. If 1/3 or more of the staked ether is maliciously attesting or failing to attest, then a 2/3 supermajority cannot exist. The defense against this is the inactivity leak. This is an emergency security measure that triggers after the chain fails to finalize for four epochs. The inactivity leak identifies those validators that are failing to attest or attesting contrary to the majority. The staked ether owned by these non-attesting validators is gradually bled away until eventually they collectively represent less than 1/3 of the total so that the chain can finalize again.</p><p>The purpose of the inactivity leak is to get the chain finalizing again. However, the attacker also loses a portion of their staked ether. Assuming there is no slashable offense (equivocating, proposing multiple blocks…) and the attacking validators are simply failing to attest, their inactivity score is updated, which signifies to the rest of the network that this validator is to be penalized in every epoch until their inactivity score returns to zero. 
The value of the penalty applied in each epoch scales with the length of time the chain has failed to finalize, denominated in epochs, and penalties are applied not only while there is a leak but also for a “refractory period” afterwards. While the inactivity leak is active, the inactive validators’ scores are increased by 4 in each epoch, while active validators’ scores decrease by 1. Once the inactivity leak deactivates (and the chain is finalizing again) the inactivity scores of all active validators decrease. This takes longer for validators who were inactive for longer because they have a larger inactivity score to deplete. Validators who remain inactive deplete their inactivity score more slowly. For a validator that stays offline for 100 epochs, their inactivity score would reach about 400. The magnitude of the penalty is calculated as:</p><p><code>inactivity_score * validator_balance / (inactivity_score_bias * inactivity_penalty_quotient)</code></p><p>Where the <code>inactivity score bias</code> is the number to increase the validator score by in each epoch and the <code>inactivity penalty quotient</code> is the square of the time taken to reduce the non-attesting validator’s balance to about 60% of its initial value, set to around 37.5 days. This means the longer the attacker blocks finality by failing to attest, the more of their stake is burned. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eth2book.info/altair/part2/incentives/inactivity#inactivity-penalties">Upgrading Ethereum</a> shows a graph estimating the decrease in validator balance during and after a short (100 epoch, ~13.5 hour) inactivity leak for a validator who is always offline. After 135 epochs the validator’s balance has decreased from 32 ETH to 31.996 ETH - a loss of 0.004 ETH. For an attacker to take control of 33% of the stake, they would have to run roughly 3,300,000 validators each staking at least 32 ETH. 
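</p><p>The penalty formula can be turned into a small simulation to sanity-check that 0.004 ETH figure. The sketch below assumes the Altair-era constants (<code>INACTIVITY_SCORE_BIAS = 4</code>, <code>INACTIVITY_SCORE_RECOVERY_RATE = 16</code>, <code>INACTIVITY_PENALTY_QUOTIENT = 3 * 2**24</code>) and ignores the small feedback from the shrinking balance, so it is an approximation rather than consensus-spec code:</p>

```python
# Approximate loss for a validator that is offline through a 100-epoch
# inactivity leak and stays offline afterwards. Altair-era constants are
# assumed; the shrinking-balance feedback is ignored, so this is a sketch,
# not consensus-spec code.

BIAS = 4              # INACTIVITY_SCORE_BIAS
RECOVERY_RATE = 16    # INACTIVITY_SCORE_RECOVERY_RATE
QUOTIENT = 3 * 2**24  # INACTIVITY_PENALTY_QUOTIENT (Altair)
BALANCE_GWEI = 32 * 10**9

score, lost_gwei, peak = 0, 0, 0
for epoch in range(135):
    in_leak = epoch < 100
    score += BIAS                      # the validator missed its attestation
    if not in_leak:                    # chain finalizing again: scores drain
        score = max(0, score - RECOVERY_RATE)
    peak = max(peak, score)
    # the penalty formula above, charged every epoch the score is non-zero
    lost_gwei += score * BALANCE_GWEI // (BIAS * QUOTIENT)

print(peak)                            # 400
print(f"{lost_gwei / 10**9:.4f} ETH")  # 0.0042 ETH
```

<p>Each offline validator therefore loses on the order of 0.004 ETH across the leak and its aftermath, consistent with the figure above.</p><p>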
This means that their attack delaying finality would cost at least <code>0.004 x 3300000 = 13200</code> ETH which at current prices equates to about $39,600,000. Almost $40 million USD to delay finality for half a day, with minimal long-term consequences on the chain itself. Of course, more persistent inactivity leaks are more expensive - in fact the magnitude of the penalty increases quadratically until the chain starts finalizing again - the longer the inactivity leak persists the faster the penalty accumulates! The precise costs of a finality-delaying attack by a validator or colluding group of validators depend on their initial balances, the time they remain offline and the time taken to regain finality. However, the bottom line is that persistent inactivity across validators representing 33% of the total staked ether is extremely expensive even though the validators have not been slashed.</p><p>Assuming that the Ethereum network is asynchronous (i.e. there are delays between messages being sent and received), an attacker controlling 34% of the total stake could cause double finality. This is because the attacker can equivocate when they are chosen to be a block producer, then double vote with all of their validators. This creates a situation where a fork of the blockchain exists, each with 34% of the staked ether voting for it. Each fork only requires 50% of the remaining validators to vote in its favor for both forks to be supported by a supermajority, in which case both chains can finalize (because 34% of the attacker’s validators + half of the remaining 66% = 67% on each fork). The competing blocks would each have to be received by about 50% of the honest validators so this attack is viable only when the attacker has some degree of control over the timing of messages propagating over the network so that they can nudge half the honest validators onto each chain. 
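</p><p>The supermajority arithmetic in the parentheses above is easy to check:</p>

```python
# Checking the double-finality arithmetic: a 34% attacker equivocates and
# votes on BOTH forks, while the honest 66% is split evenly between them.

attacker = 0.34
honest = 1.0 - attacker

support_per_fork = attacker + honest / 2  # attacker's votes count on each fork
print(round(support_per_fork, 2))  # 0.67
print(support_per_fork >= 2 / 3)   # True: both forks clear the threshold
```

<p>Note that both forks clear the two-thirds threshold only if the honest set splits almost exactly in half.</p><p>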
This is also why this attack requires network asynchrony - if all nodes received messages instantaneously, they would immediately be aware of both blocks and handle the equivocation consistently, so the honest validators would not split across the two forks. The attacker would necessarily destroy their entire stake (34% of ~10 million ether with today’s validator set) to achieve this double finality because 34% of their validators would be double-voting simultaneously - a slashable offense with the maximum correlation penalty. The defense against this attack is only the very large cost of destroying 34% of the total staked ether. Recovering from this attack would require the Ethereum community to coordinate “out-of-band” and agree to follow one or other of the forks and ignore the other. The complexities associated with this social backstop are discussed later.</p><h2 id="h-validators-controlling-50percent-of-the-total-stake" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Validators controlling ~50% of the total stake</h2><p>At 50% of the staked ether, a mischievous pool of validators could in theory split the chain into two equally sized forks. Similar to the balancing attacks described earlier, an attacker could use just one of their validators to equivocate by proposing two blocks for the same slot. Then, instead of needing to manipulate half the network by carefully transmitting messages, they could simply use their whole 50% stake to vote contrarily to the honest validator set, thereby maintaining two forks and preventing finality. After four epochs the inactivity leak would activate on both forks because each would see half of their validators failing to attest. Each fork would leak away the stake of opposing halves of the validator set, eventually resulting in both chains finalizing with different validators representing a 2/3 supermajority. At this point, the only option is to fall back on a social recovery as described later on. 
However, it seems highly unlikely that an adversarial group of validators could consistently control precisely 50% of the total stake given a degree of flux in honest validator numbers, network latency etc. Perhaps, with slightly over 50% of the stake, they could dynamically adjust the portion of their pool voting in each slot to maintain a perfect balance between two forks. While the risk of successful attack undoubtedly increases with the size of the adversarial stake, the attack vector associated with exactly 50% of the stake seems unlikely to be successfully exploited - the huge cost of mounting such an attack combined with the low likelihood of success appears to be a strong disincentive for a rational attacker.</p><p>At just over 51% of the total stake, however, the attacker could dominate the fork choice algorithm. In this case, the attacker would be able to attest with the majority vote, giving them sufficient control to do short reorgs without needing to fool honest clients. 51% of the stake does not allow the attacker to change history, but they have the ability to influence the future by applying their majority votes to favorable forks and/or reorging inconvenient non-justified blocks out of the chain. The honest validators would follow suit because their fork choice algorithm would also see the attacker’s favored chain as the heaviest, so the chain could finalize. This enables the attacker to censor certain transactions, do short-range reorgs and extract maximum MEV by reordering blocks in their favor. As with proof-of-work chains, a 51% attack is extremely problematic. 
The defense against this is the huge cost of a majority stake (currently just under $19 billion USD) which is put at risk by an attacker because the social layer is likely to step in and adopt an honest minority fork, devaluing the attacker’s stake dramatically.</p><h2 id="h-attackers-controlling-greater66percent-of-the-total-stake" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Attackers controlling &gt;=66% of the total stake</h2><p>An attacker with 66% or more of the total staked ether can finalize their preferred chain without having to coerce any honest validators. The attacker can simply vote for their preferred fork and then finalize it, because they can vote with a dishonest supermajority. As the supermajority stakeholder, the attacker would always control the contents of the finalized blocks, with the power to spend, rewind and spend again, censor certain transactions and reorg the chain at will. By purchasing additional ether to control 66% rather than 51%, the attacker is effectively buying the ability to do ex post reorgs and finality reversions (i.e. change the past as well as control the future). The cost of 66% of the total stake is currently about $25 billion USD. The only real defense here is to fall back to the social layer to coordinate adoption of an alternative fork. We can explore this in more detail in the next section.</p><h2 id="h-layer-0-the-last-line-of-defense" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Layer-0: the last line of defense</h2><p>What happens when the coded defenses are breached and an attacker becomes able to finalize a dishonest chain?</p><p>This scenario can arise in multiple ways - most obviously when the attacker has a supermajority stake and can simply finalize with their own votes, or 51% plus additional attestations from honest validators. With 34% of the stake and some control over message delivery across the network the attacker can finalize two forks. 
There are also scenarios where a reorg’d chain could be finalized as a consequence of the inactivity leak. If an attacker successfully equivocates and divides the validator set across two forks, the inactivity leak will activate on both. The question then becomes - will the honest or dishonest validators regain finality first? If the honest validators finalize first, the honest chain becomes canonical - the fork choice algorithm in all clients across the network accepts the finalized portion of the chain and Ethereum is back in the control of honest players. However, if the dishonest validators manage to finalize the chain, the Ethereum community is in a very difficult situation. The canonical chain includes a dishonest section baked into its history, while honest validators end up being punished for attesting to an alternative (honest) chain. A third (unlikely) possibility is a permanent network schism where validators on one fork are somehow unaware of their counterparts on the opposing fork. This would create two forks that both finalize independently of one another, each one leaking away the stakes of the opposite set of validators. These two chains could then never be re-united because they would have different finalized checkpoints. A corrupted-but-finalized chain could also result from a bug (rather than an attack) in a majority client. On Ethereum’s execution layer the go-ethereum (Geth) client overwhelmingly dominates, being run by &gt;85% of all nodes. On the consensus layer, Prysm currently dominates - until recently being run by &gt;66% of the total validators (now down to ~50% after a sustained community campaign). It is possible that bugs in majority execution or consensus clients could halt finality or lead to incorrect data being finalized. 
On the Kiln testnet a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@prysmaticlabs/HyZqgTA-c">bug in Prysm</a> affected block production - this was inconsequential because the nodes had a roughly equal share of four different clients, but the same bug on mainnet would have been experienced by &gt;66% of the clients. There are therefore several (very low probability) routes to a dishonest finalized chain. They all require either an enormous investment in staked ether (which is then put at risk by the attacker) or very sophisticated manipulation of the validator set, which has so far only been shown to be feasible under idealized conditions and has anyway been mitigated by software updates. Nevertheless, these scenarios cannot be ruled out as impossible. In the end, the ultimate fallback is to rely on the social layer - Layer 0 - to resolve the situation.</p><p>One of the strengths of Ethereum’s PoS consensus is that there are a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://youtu.be/1m12zgJ42dI?t=1712">range of defensive strategies</a> that the community can employ in the face of an attacker. A minimal response could be to forcibly exit the attackers’ validators from the network without any additional penalty. To re-enter the network the attacker would have to join an activation queue that ensures the validator set grows gradually. For example, adding enough validators to double the amount of staked ether takes about 200 days, effectively buying the honest validators 200 days before the attacker can attempt another 51% attack. 
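</p><p>That ~200 day figure can be roughly reconstructed from the activation queue parameters. The sketch below assumes the phase0 churn rule (<code>max(4, n // 65536)</code> activations per epoch), 225 epochs per day, and a starting set of ~430,000 validators (roughly its size at the time of writing); it is a back-of-envelope estimate, not an exact queue model:</p>

```python
# Rough reconstruction of how long the activation queue would delay an
# attacker trying to double the validator set. Assumptions: churn limit of
# max(4, n // 65536) activations per epoch (the phase0 rule), 225 epochs
# per day, starting from ~430,000 active validators.

MIN_CHURN, CHURN_QUOTIENT, EPOCHS_PER_DAY = 4, 65536, 225

def days_to_double(n_start: int) -> float:
    n, target, epochs = n_start, 2 * n_start, 0
    while n < target:
        n += max(MIN_CHURN, n // CHURN_QUOTIENT)  # churn grows with the set
        epochs += 1
    return epochs / EPOCHS_PER_DAY

print(f"~{days_to_double(430_000):.0f} days")  # on the order of 200 days
```

<p>Because the churn limit grows with the validator set, the wait is somewhat shorter than a naive constant-rate estimate would suggest, but still on the order of 200 days.</p><p>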
However, the community could also decide to penalize the attacker more harshly, by revoking past rewards or burning some portion (up to 100%) of their staked capital.</p><p>Whatever the penalty imposed on the attacker, the community also has to decide together whether the dishonest chain, despite being the one favored by the fork choice algorithm coded into the Ethereum clients, is in fact invalid and that the community should build on top of the honest chain instead. Honest validators could collectively agree to build on top of a community-sanctioned fork of the Ethereum blockchain that might, for example, have forked off the canonical chain before the attack started or have the attackers’ validators forcibly removed. Honest validators would be incentivized to build on this chain because they would avoid the penalties applied to them for failing (rightly) to attest to the attacker’s chain. Exchanges, on-ramps and applications built on Ethereum would presumably prefer to be on the honest chain and would follow the honest validators to the honest blockchain. However, this would be an extremely messy governance challenge. Some users and validators would undoubtedly lose out as a result of the switch back to the honest chain, transactions in blocks validated after the attack could potentially be rolled back, disrupting the application layer, and it runs counter to the ethos of those users who believe “code is law”. Exchanges and applications will most likely have linked off-chain actions to on-chain transactions that may now be rolled back, starting a cascade of retractions and revisions that would be hard to unpick fairly, especially if the ill-gotten gains have been mixed, deposited into DeFi or other derivatives with secondary effects for honest users. Undoubtedly some users, perhaps even institutional ones, would have already benefited from the dishonest chain either by being shrewd or by serendipity, and might oppose a fork to protect their gains. 
There have been calls to rehearse the community response to &gt;51% attacks so that a sensible coordinated mitigation could be executed quickly. There is some useful discussion by Vitalik on ethresear.ch <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/timeliness-detectors-and-51-attack-recovery-in-blockchains/6925">here</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/responding-to-51-attacks-in-casper-ffg/6363">here</a> and on Twitter <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/skylar_eth/status/1551798684727508992?s=20&amp;t=oHZ1xv8QZdOgAXhxZKtHEw">here</a>.</p><p>Governance is already a complicated topic. Managing a Layer-0 emergency response to a dishonest finalizing chain would undoubtedly be challenging for the Ethereum community, but it <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/history/#dao-fork-summary">has happened</a> - <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/history/#tangerine-whistle">twice</a> - in Ethereum’s history. Nevertheless, there is something fairly satisfying in the final fallback sitting in meatspace. 
Ultimately, even with this phenomenal stack of technology above us, if the worst were ever to happen real people would have to coordinate their way out of it.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fd39aee6053f4bed232d121017006297a0656efbe0ef0505612399ab1e971f3c.jpg" alt="via @Owocki (Twitter)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">via @Owocki (Twitter)</figcaption></figure><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>This article has explored some of the ways attackers might attempt to exploit Ethereum’s proof-of-stake consensus mechanism. Reorgs and finality delays were explored for attackers with increasing proportions of the total staked ether. Overall, a richer attacker has more chance of success because their stake translates to voting power they can use to influence the contents of future blocks. At certain threshold amounts of staked ether, the attacker’s power levels up:</p><p><strong>33%:</strong> delay finality</p><p><strong>34%:</strong> cause double finality</p><p><strong>51%:</strong> censorship, control over blockchain future</p><p><strong>66%:</strong> censorship, control over blockchain future and past</p><p>There is also a range of more sophisticated attacks that require small amounts of staked ether but rely upon the attacker having very fine control over message timing to sway the honest validator set in their favor.</p><p>Despite these potential attack vectors, the overall risk is relatively low. This is because of the huge cost of the staked ether put at risk by an attacker aiming to overwhelm honest validators with their voting power. 
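To make that cost scaling concrete: if honest validators hold a stake H, an attacker who wants to control a fraction f of the total must deposit H·f/(1−f) of their own ether. The sketch below illustrates the relationship; the honest-stake and price figures are hypothetical placeholders for illustration, not live network data:

```python
def required_attacker_stake(honest_stake_eth: float, target_share: float) -> float:
    """Stake needed so the attacker holds `target_share` of the total.

    Solves s / (s + H) = f  =>  s = H * f / (1 - f).
    """
    if not 0 < target_share < 1:
        raise ValueError("target_share must be in (0, 1)")
    return honest_stake_eth * target_share / (1 - target_share)

# Hypothetical figures for illustration only (not live network data).
HONEST_STAKE_ETH = 14_000_000
ETH_PRICE_USD = 1_300

for share in (0.33, 0.34, 0.51, 0.66):
    stake = required_attacker_stake(HONEST_STAKE_ETH, share)
    print(f"{share:.0%} control: ~{stake / 1e6:.1f}M ETH "
          f"(~${stake * ETH_PRICE_USD / 1e9:.0f}B)")
```

Note how the required stake grows super-linearly with the target share: controlling 66% of the total requires nearly twice the entire honest stake.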
The built-in “carrot and stick” incentive layer protects against most malfeasance, especially for low-stake attackers. More subtle bouncing and balancing attacks are also unlikely to succeed because real network conditions make the fine control of message delivery to specific subsets of validators very difficult to achieve, and client teams have quickly closed the known bouncing, balancing and avalanche attack vectors with simple patches.</p><p>34%, 51% or 66% attacks would likely require out-of-band social coordination to resolve. While this would likely be painful for the community, the ability for a community to respond out-of-band is a strong disincentive for an attacker. The Ethereum social layer is the ultimate backstop - a technically successful attack could still be neutered by the community agreeing to adopt an honest fork. There would be a race between the attacker and the Ethereum community - the (currently) $25 billion spent on a 66% attack would probably be obliterated by a successful social-coordination response if it was delivered quickly enough, leaving the attacker with heavy bags of illiquid staked ether on a known dishonest chain ignored by the Ethereum community. The likelihood that this would end up being profitable for the attacker is sufficiently low as to be an effective deterrent. This is why investment in maintaining a cohesive social layer with tightly aligned values is so important.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/060f57c30b6046fae7ee487c56f39a5bd475b4c0adaa580ccfd51e7186c4ecc8.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Gitcoin FDD squad onboarding]]></title>
            <link>https://paragraph.com/@jmcook/gitcoin-fdd-squad-onboarding</link>
            <guid>IDpXFDQ4od6WUwtWLzMH</guid>
            <pubDate>Mon, 04 Apr 2022 16:04:14 GMT</pubDate>
            <description><![CDATA[People make or break DAOs so onboarding good people is critical. However, it is also notoriously difficult to get onboarding right, and not only for DAOs - onboarding challenges cost businesses millions every year and there is a growing recognition that organizations with strong onboarding protocols outperform those with a "sink or swim" approach. Onboarding is where organizations and contributors make their first impressions on each other, set expectations and establish the tone of their new...]]></description>
            <content:encoded><![CDATA[<p><strong>People make or break DAOs</strong> so onboarding good people is critical. However, it is also notoriously difficult to get onboarding right, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.mybrightspark.com/the-key-challenges-of-onboarding/">not only for DAOs</a> - onboarding challenges <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.peoplemanagement.co.uk/voices/comment/poor-onboarding-costs-businesses-millions#gref">cost businesses millions</a> every year and there is a growing recognition that organizations with strong onboarding protocols outperform those with a &quot;sink or swim&quot; approach. Onboarding is where organizations and contributors make their <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.bamboohr.com/blog/first-impressions-onboarding/">first impressions</a> on each other, set expectations and establish the tone of their new relationship. Getting this wrong has <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.businessnewsdaily.com/9936-consequences-poor-onboarding.html">substantial costs</a> in terms of reputation, time, morale, money and opportunity costs when potentially great contributors decide to go elsewhere.</p><p>In this article I will reflect on my own onboarding experiences into the Fraud Detection and Defense (FDD) squad in Gitcoin DAO. These reflections might help the ongoing efforts to refine the onboarding process and also provide some pointers for prospective contributors.</p><h2 id="h-dao-level-onboarding" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">DAO-level onboarding</h2><p>In December 2021 I decided to start contributing to the DAO. Gitcoin was the obvious choice for me. 
Public goods funding was my gateway into the web3 space a few years earlier and I was arriving thoroughly green-pilled. I&apos;d also been a winner of a Gitcoin RFP a few months before and met a few people from the DAO as a result. I was happy to chat with some of the core DAO members about potential routes to work. These initial conversations led me to the FDD stream, where I felt my background in data science could be put to good use. The first steps were quite organic and very informal, mostly consisting of discord DMs.</p><p>Everyone I spoke to was extremely friendly, welcoming and encouraging of my onboarding into the DAO, but at the same time I didn&apos;t feel like I was getting the information I needed to progress. I eventually got access to some previously-hidden channels in the Gitcoin DAO discord and was directed to a DAO-level onboarding call. The call gave very useful context to the DAO and made clear that the appropriate next step was to seek out members of a specific workstream and to submit a Typescript application form.</p><p>I started joining some of the community calls, lurking in some of the channel discussions to try to absorb some of the discussions and occasionally chipping in some comments, but overall I felt fairly directionless. The outcome from most conversations was redirection to another DAO member, who would then redirect me to someone else. This was a double-edged sword because on the one hand I got to connect with several key people in the DAO but at the same time didn’t have a single point of contact that could give straight answers. I knew the people I was connected to were busy and I was wary of becoming burdensome by badgering them for action. I hadn&apos;t had any response to the Typescript application form I had submitted. 
At this point my onboarding stalled; I gradually got sidetracked with other projects and started to drift away from the DAO.</p><h2 id="h-fdd-onboarding" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">FDD onboarding</h2><p>A couple of months later, my casual surfing of the DAO discord led me to a conversation about a vacancy for a technical writer in the fdd squad. This seemed like a perfect opportunity to rekindle my attempts to onboard, so I immediately reached out to flag myself as a candidate. Again, this began with informal DM exchanges with some core contributors. I gave some credentials and some examples of my writing and the conversation quickly progressed to organizing a call that would act as an interview. At the same time I was directed to another DAO member who was better placed to give some specific onboarding advice. At this point I was very enthusiastic but still a little clueless about the specific actions I needed to take to move things forwards. A key unblocker was the link to an fdd Notion document that included a lot of answers to my outstanding questions. I hadn&apos;t seen this document before (the existence of these Notion docs is a good example of an unknown unknown - really useful once you have it, but hard to know it exists so you can look for it)!</p><h2 id="h-interview" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Interview</h2><p>The interview was when things really started moving. The call itself was a friendly conversation where I explained what skills I have to offer and how they might be useful to the fdd stream, and reciprocally the interviewers gave some more context that helped me to understand how the stream operates. I was given dates and times for weekly meetings and luckily I was available for one that evening. 
At the same time, I had an open dialogue with my onboarding guide and things were gradually starting to crystallize.</p><h2 id="h-the-deluge" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The deluge</h2><p>Immediately after the interview I started getting tagged into discord threads and receiving DMs from key people from inside the fdd stream. Some DMs were simple introductions, some included onboarding advice, some suggestions for collaborations or requests for input. I attended the weekly call and introduced myself to the fdd team, and tried to contribute to the discussions as best I could. In the span of twenty-four hours my situation had flipped from information drought to deluge. I set up a kanban that evening to try to keep track of the links, threads, slide-decks, Github repositories and Notion docs that had flooded in during the day (and, due to time differences, the night).</p><p>It would have been very easy to be overwhelmed at this point, but this was mitigated by the very welcoming team and truly intriguing work. I decided to lean into it - I was not feeling entirely in control but I had enough information, connections and ideas to be pretty confident things would fall into place quickly if I just kept digging in. And indeed, after the initial flood the signal to noise ratio of the incoming information improved drastically and I started to get more comfortable with my new role in the DAO. A few days later I was directed to an onboarding checklist, which helped to identify the missing pieces. It would have been helpful to have this right at the start to constrain the unknown unknowns and indicate what questions should have been asked early on (in fairness I have since learned I joined just before this checklist was launched).</p><p>The next step was simply to start working on specific projects. I began by selectively progressing conversations about opportunities that piqued my interest. 
I was very clear in my own mind that the aim of each of these conversations was to nail down a specific brief with deliverables and timeline, and also make sure the person I was speaking to had authority to sign off some budget to cover my time on the task.</p><p>At this point I feel much more comfortable navigating the DAO. It requires a bit of hustle, but the opportunities are abundant and the support plentiful. At the time of writing I am at the beginning of a “trial period”. This is a six-week probationary phase before becoming a “full contributor”. Transitioning out of the trial period will require a reflective discussion with the onboarding guides about what has been achieved so far, and completion of a questionnaire about the onboarding.</p><p>To recap the onboarding steps I took to get to this point:</p><ol><li><p>Initial conversations with DAO members</p></li><li><p>DAO-level onboarding including onboarding call</p></li><li><p>Typescript application form to fdd</p></li><li><p>Access to hidden discord channels including fdd</p></li><li><p>Second round of DMs (this time responding to specific call)</p></li><li><p>Connection to onboarding guide(s)</p></li><li><p>Access to fdd Notion doc</p></li><li><p>Interview</p></li><li><p>Start attending weekly calls</p></li><li><p>Receive onboarding checklist and backfill outstanding items</p></li><li><p>Start working on specific projects</p></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1375bac34ba66ee3504126c8b3cad203eaaafa8df760d9e6024104501f7c50f9.jpg" alt="Onboarding can feel like arriving in a busy new city- the local guide is invaluable for finding the right path. 
(Ph: Benh LIEU SONG (Flickr) - Shibuya Scramble Crossing, CC BY-SA 2.0)" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Onboarding can feel like arriving in a busy new city- the local guide is invaluable for finding the right path. (Ph: Benh LIEU SONG (Flickr) - Shibuya Scramble Crossing, CC BY-SA 2.0)</figcaption></figure><h2 id="h-reflections" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Reflections</h2><p>My experience of onboarding is, I think, fairly typical. I very nearly drifted away from the DAO after my initial attempt to onboard fizzled, but turned it around later on with a lot of help from personal guides. There are several specific lessons I learned during the process:</p><p><strong>1) DAO onboarding is a two-way street.</strong></p><p>Responsibility for onboarding doesn&apos;t sit entirely with the DAO. Chances are the members doing the onboarding are extremely busy managing their own contributions to the DAO and don&apos;t necessarily have individual responsibility for bringing new people into the fold. They might be doing it voluntarily and/or part time, alongside many other tasks. Arriving at the DAO without a specific offering is not helpful (although I am told this constraint is relaxed somewhat in other streams relative to FDD). I made this mistake first time around - I arrived &quot;wanting to contribute to the DAO&quot; and a vague sense that I had some useful skills, but lacking a clear niche I could fill. Second time I arrived able to speak to specific people about specific tasks that I could tackle. I already knew the gap I could fill from reading discussions on the discord. 
At the same time, my expectations about the onboarding procedure had been adjusted and I arrived with a bit more hustle in my mindset.</p><p><strong>2) The personal touch is important.</strong></p><p>The DAO-level onboarding is like taking a train to a new city at rush hour - sure you have access but you don&apos;t know where to go or what to do to make the best of it, and the pace of activity can be overwhelming. To overcome this, a local guide is priceless. The stream-level onboarding provided me with this local guide (several, in fact). I&apos;ve seen this more personal approach to onboarding likened to a hotel concierge service. It can also be thought of as a personal Morpheus - a green-pill guide that can &quot;show how deep the rabbit hole goes&quot; and how best to explore it. They can help to make introductions, explain procedures, point out relevant meetings and resources, and generally provide encouragement. Person-to-person interactions were certainly invaluable in my onboarding experience.</p><p><strong>3) Some structure is good too.</strong></p><p>The personal touch is great for small numbers of contributors, but it puts a lot of load on the guide and also requires the contributor to know what questions to ask. The &quot;unknown unknowns&quot; are sometimes quite insidious and often only reveal themselves too late, for example after a key meeting has already been missed or an opportunity has already expired. The onboarding guide can&apos;t really be expected to keep track of all these various unknowns, especially considering they are probably onboarding multiple people simultaneously, and the new-starter doesn&apos;t necessarily know they exist to ask about them. Also, it&apos;s not always clear to the contributor that things are moving behind the scenes - they might feel forgotten despite people working away to onboard them but not keeping the communication quite up to date. 
Some additional structure to the onboarding that allows a new contributor to track their own progress and see where in the onboarding process they are could really be helpful.</p><p><strong>4) Sometimes the communication will be very asynchronous.</strong></p><p>By its nature the DAO is international across many time zones. I seem to be offset from my key contacts by a solid 8 hours, meaning my discord pings like crazy just when I&apos;m eating dinner or settling down to sleep. Similarly, questions asked in the morning might not get answers until the evening. This is just a reality that has to be embraced - communication within the DAO will often be very asynchronous and some things will necessarily be delayed. Rather than being a drag, this can be a valuable opportunity to focus on some deep work, take time to think over and research responses before sending them or to resolve issues independently to minimize the load on other DAO members.</p><p><strong>5) The Gitcoin discord is its own challenge.</strong></p><p>The Gitcoin discord has a lot of very active channels and without sharp discord skills it is difficult to keep track of multiple conversations, mentions, shared files etc. Experience on other, smaller servers probably isn’t really sufficient preparation for a server like Gitcoin’s where the sheer size and pace means information can be lost to history in a matter of minutes! Time browsing the discord before onboarding is certainly valuable, but some explicit onboarding support relating to navigating the discord server could be really useful for new starters.</p><p><strong>6) The DAO onboarding currently does several things very well.</strong></p><p>The personal route to onboarding into the FDD stream is priceless. I was immediately made to feel valued and welcomed and it really made me want to contribute with energy and enthusiasm. The guides were helpful, honest and available. The stream really does a good job with this. 
The DAO-level onboarding call was also a useful starting point and there were definitely useful pointers in there about how to take the next steps. I think in some cases the combination of this DAO-level onboarding and the personal route at the stream-level is already a viable system for onboarding new starters.</p><p><strong>7) There are bottlenecks to address.</strong></p><p>Although the current system worked for me in the end, it very nearly didn&apos;t. This was my own fault - my mindset was all wrong - I was expecting too much scaffolding without doing a good job of communicating what I could bring to the DAO in return. On the other hand, some more easily accessible onboarding information might have helped me come to this realization sooner and arrive better prepared. A more robust formalized procedure, communicated up-front, could help ensure someone has oversight of these actions and the new starter knows what to chase up and when. Similarly, it would be easier to stay motivated during the onboarding if new starters know where they are in the process and what is yet to come.</p><p>In summary, I am delighted to have onboarded into the fdd squad. I am actively working on interesting projects, connecting with great people and generally enjoying getting stuck in. However, there are clearly improvements that could be made to the onboarding procedure. Similar issues are acutely felt in all kinds of organizations and there may well be examples of best practice or lessons from the growing literature that can be applied in the context of DAOs. 
There are also several other <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://creators.mirror.xyz/ggSQQlTSGqJ2_U7HVNjm4f3s98on5EfUyR9rW_z3fw0">DAOs</a> that are beginning to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mirror.xyz/daomstr.eth/YNr-PuwUgp9LoK6xlrHhLlfKKo2GHzR8g00wBnfhQbc">report more openly</a> about their onboarding challenges and successes. Diving into this material is way outside the scope of this article, but I&apos;m aiming to come back to it later in collaboration with some other DAO members. These challenges are only going to become more acute as DAOs become more popular and onboarding bottlenecks are exacerbated by scale, so it seems like an opportune time to allocate some resources to developing a more robust onboarding strategy.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Ethereum's energy consumption ]]></title>
            <link>https://paragraph.com/@jmcook/ethereum-s-energy-consumption</link>
            <guid>hx33oNESZXKrYDp5285o</guid>
            <pubDate>Wed, 09 Feb 2022 17:31:04 GMT</pubDate>
            <description><![CDATA[After eight years of proof-of-work mining, Ethereum finally became green on 15th September 2022. It did so by swapping out proof-of-work mining for a new proof-of-stake based consensus mechanism, using ETH instead of energy to secure the network. Ethereum&apos;s proof-of-stake mechanism now uses just ~0.0026 TWh/yr across the entire global network. The energy consumption estimate for Ethereum comes from a CCRI (Crypto Carbon Ratings Institute) study. They generated bottom...]]></description>
            <content:encoded><![CDATA[<p>After eight years of proof-of-work mining, Ethereum finally became green on 15th September 2022. It did so by swapping out proof-of-work mining for a new <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/consensus-mechanisms/pos">proof-of-stake</a> based consensus mechanism, using ETH instead of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/consensus-mechanisms/pow">energy to secure the network</a>. Ethereum&apos;s proof-of-stake mechanism now uses just <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://carbon-ratings.com/eth-report-2022">~0.0026 TWh/yr</a> across the entire global network.</p><p>The energy consumption estimate for Ethereum comes from a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://carbon-ratings.com/">CCRI (Crypto Carbon Ratings Institute)</a> study. They generated bottom-up estimates of the electricity consumption and the carbon footprint of the Ethereum network (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://carbon-ratings.com/eth-report-2022">see the report</a>). They measured the electricity consumption of different nodes with various hardware and client software configurations. The estimated <strong>2,601 MWh</strong> (0.0026 TWh) for the network’s annual electricity consumption corresponds to yearly carbon emissions of <strong>870 tonnes CO2e</strong> when region-specific carbon intensity factors are applied. 
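As a rough cross-check of these two headline numbers, dividing the annual emissions by the annual electricity consumption recovers the implied network-average carbon intensity (a back-of-the-envelope sketch, not the CCRI methodology, which weights intensity by region):

```python
# CCRI headline figures quoted above.
annual_electricity_mwh = 2_601   # ~0.0026 TWh/yr
annual_emissions_tco2e = 870     # tonnes CO2e/yr

# Implied fleet-average carbon intensity of the electricity consumed.
# 1 tCO2e/MWh == 1000 gCO2e/kWh.
intensity_g_per_kwh = annual_emissions_tco2e / annual_electricity_mwh * 1000
print(f"~{intensity_g_per_kwh:.0f} gCO2e/kWh")  # ~334 gCO2e/kWh
```

That implied ~334 gCO2e/kWh sits in the range of typical grid-mix intensities, consistent with nodes running on ordinary household electricity.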
This value changes as nodes enter and leave the network - you can keep track using a rolling 7-day average estimate by the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccaf.io/cbnsi/ethereum">Cambridge Blockchain Network Sustainability Index</a> (note that they use a slightly different method for their estimates - details available on their site).</p><p>To contextualize Ethereum&apos;s energy consumption, we can compare annualized estimates for some other industries. This helps us better understand whether the estimate for Ethereum is high or low.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b6dcdb1ba422a1eed6ccac9dc4c856caebf01aab62093405faf0b324a3cf5e45.png" alt="Energy consumption across several industries, from ethereum.org" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="">Energy consumption across several industries, from ethereum.org</figcaption></figure><p>The chart above displays the estimated yearly energy consumption in TWh/yr for Ethereum, compared to several other industries.</p><p>It is complicated to get accurate estimates for energy consumption, especially when what is being measured has a complex supply chain or deployment details that influence its efficiency. Consider Netflix or YouTube as examples. Estimates of their energy consumption vary depending upon whether they only include the energy used to maintain their systems and deliver content to users (<em>direct expenditure</em>) or whether they include the expenditure required to produce content, run corporate offices, advertise, etc (<em>indirect expenditure</em>). 
Indirect usage could also include the energy required to consume content on end-user devices such as TVs, computers and mobiles, which in turn depends on which devices are used.</p><p>There is some discussion of this issue on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.carbonbrief.org/factcheck-what-is-the-carbon-footprint-of-streaming-video-on-netflix">Carbon Brief</a>. In the chart above, the value reported for Netflix includes their self-reported <em>direct</em> and <em>indirect</em> usage. YouTube only provides an estimate of their own <em>direct</em> energy expenditure, which is around <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gstatic.com/gumdrop/sustainability/google-2020-environmental-report.pdf">12 TWh/yr</a>.</p><p>The chart above also includes comparisons to Bitcoin and proof-of-work Ethereum. It is important to note that the energy consumption of proof-of-work networks is not static - it changes day-to-day. The value used for proof-of-work Ethereum was from just before <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereumorgwebsitedev01-updateenergypage.gatsbyjs.io/en/roadmap/merge/">The Merge</a> to proof-of-stake, as predicted by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://digiconomist.net/ethereum-energy-consumption">Digiconomist</a>. Other sources, such as the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccaf.io/cbnsi/ethereum/1">Cambridge Blockchain Network Sustainability Index</a>, estimate the energy consumption to have been much lower (closer to 20 TWh/yr). 
Estimates for Bitcoin&apos;s energy consumption also vary widely between sources and it is a topic that attracts a lot of nuanced <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.coindesk.com/business/2020/05/19/the-last-word-on-bitcoins-energy-consumption/">debate</a> about not only the amount of energy consumed but the sources of that energy and the related ethics.</p><p>The point of these comparisons isn’t to cast any kind of judgement, but to put the reduction in energy achieved by The Merge into context.</p><h2 id="h-energy-debates" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Energy debates</h2><p>Energy consumption does not necessarily map precisely to environmental footprint because different projects might use different energy sources, for example a lesser or greater proportion of renewables.</p><p>For example, the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccaf.io/cbnsi/cbeci/comparisons">Cambridge Bitcoin Electricity Consumption Index</a> indicates that the Bitcoin network demand could theoretically be powered by gas flaring or electricity that would otherwise be lost in transmission and distribution.</p><p>Ethereum&apos;s route to sustainability was to replace the energy-hungry part of the network with a green alternative. It is also frequently argued that all industries (with exceptions) generally plug into the same grid, and rather than moralizing about which specific uses are acceptable and which aren’t (which will always be divisive), it would be better to focus on migrating to a more sustainable and efficient grid infrastructure.</p><p>Within the blockchain world there are divergent opinions about the value of PoW. Supporters of PoW argue that the large energy expenditure is what gives the network security and is therefore necessary. 
Proof-of-work is more <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Lindy_effect">Lindy</a> than proof-of-stake today. However, many believe that at least equal security can be derived from proof-of-stake based consensus, implying that high energy consumption is a choice, not a necessity for securing a chain.</p><p>It has also been argued that access to energy is more decentralized than access to capital to stake. The counter-argument is the centralizing power of hardware escalation - most mining is done by industrial players that can afford to benefit from economies of scale and frequent hardware upgrades that individuals can’t access.</p><p>In the end, the issue always reduces down to an individual choice about whether the energy expenditure is <em>worth it.</em> One person might suggest that proof-of-work mining is justified because the energy is being spent creating a form of money which increases individual freedom, and anyway only individuals and markets should be able to determine whether a particular use of energy is justified. Others would argue that proof-of-stake doesn’t require large amounts of energy to be consumed in the first place and is therefore ethically superior.</p><p>It is also relevant that part of the reason ether has value, liquidity and relatively wide distribution is because a real-world commodity (energy) was expended in the past, when Ethereum ran on proof-of-work. That value is now baked into the token making it a viable asset for securing the network. 
In a way, Ethereum’s higher carbon past has enabled its low carbon future.</p><p>At the end of the day, at <em>any</em> level of expenditure, some people believe it to be worth it, others don’t.</p><p>You can browse energy consumption and carbon emission estimates for many industries on the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ccaf.io/cbnsi/ethereum">Cambridge Blockchain Network Sustainability Index site</a>.</p><h2 id="h-ethereums-carbon-debt" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Ethereum’s carbon debt</h2><p>Ethereum originally used proof-of-work, which had a much greater environmental cost than the current proof-of-stake mechanism. It switched over in September 2022, after 7 years on proof-of-work.</p><p>From the very beginning, Ethereum planned to implement a proof-of-stake based consensus mechanism, but doing so without sacrificing security and decentralization took years of focused research and development. Therefore, a proof-of-work mechanism was used to get the network started. Proof-of-work requires miners to use their computing hardware to calculate a value, expending energy in the process. Just before the switch to proof-of-stake, the energy consumption was around <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://digiconomist.net/ethereum-energy-consumption">78 TWh/yr</a>, comparable to that of Uzbekistan, with carbon emissions equivalent to those of Azerbaijan (33 MT/yr).</p><p>The Crypto Carbon Ratings Institute (CCRI) examined the impact of Ethereum’s transition from proof-of-work to proof-of-stake. The annualized electricity consumption was reduced by more than <strong>99.988%</strong>. Likewise, Ethereum’s carbon footprint was decreased by approximately <strong>99.992%</strong> (from 11,016,000 to 870 tonnes CO2e). 
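These headline percentages follow directly from the before-and-after figures; a quick sanity check (a Python sketch using only the carbon numbers quoted above):

```python
# Quick check of the CCRI carbon-reduction figure quoted above.
def percent_reduction(before: float, after: float) -> float:
    """Percentage drop from `before` to `after`."""
    return (1 - after / before) * 100

# Carbon footprint: 11,016,000 tonnes CO2e (PoW) -> 870 tonnes CO2e (PoS)
print(f"{percent_reduction(11_016_000, 870):.3f}%")  # 99.992%
```

The same formula applied to the electricity estimates reproduces the quoted 99.988% reduction.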
Therefore, the environmental cost of securing the network is drastically reduced.</p><h2 id="h-why-is-proof-of-stake-low-energy" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Why is proof-of-stake low-energy?</h2><p>Under proof-of-stake, PoW puzzle-solving is not necessary. Miners are replaced by validators who perform the same function, except that instead of expending their assets in the form of computational work, they stake ether as collateral against dishonest behavior. If the validator is lazy (offline when they are supposed to fulfill some validator duty), their staked ether can slowly leak away, while provably dishonest behavior results in the staked assets being &quot;slashed&quot;. This strongly incentivizes active and honest participation in securing the network. This disincentive structure allows for network security under proof-of-stake while eliminating the need to expend energy on brute-force computations. A detailed explanation of network security under proof-of-stake can be found <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.ca/general/2017/12/31/pos_faq.html">here</a>.</p><p>The hardware arms race that incentivizes PoW miners to consistently upgrade their hardware to ever more energy-hungry systems is removed under PoS. Validators can participate with normal home computers or even low-power devices such as a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum-on-arm-documentation.readthedocs.io/en/latest/user-guide/ethereum2.0.html">Raspberry Pi</a>. This removes a significant barrier to entry and widens participation in securing the network. 
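The carrot-and-stick scheme described above can be caricatured in a few lines of Python; the reward and slash magnitudes here are illustrative placeholders, not protocol constants:

```python
def update_balance(balance: float, attested: bool, honest: bool) -> float:
    """Toy model of one validator duty under proof-of-stake.

    REWARD and SLASH are illustrative placeholders, not protocol
    constants: real values are derived from network parameters.
    """
    REWARD = 0.00001  # paid for a timely, correct attestation
    SLASH = 0.5       # burned for provable dishonesty

    if not honest:
        return balance - SLASH   # dishonest: stake is "slashed"
    if attested:
        return balance + REWARD  # active and honest: rewarded
    return balance - REWARD      # lazy/offline: stake slowly leaks
```

An honest, active validator's balance drifts upward, an offline one leaks at a comparable rate, and provable dishonesty costs far more - which is the whole incentive argument, with no energy expended on computation.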
However, there is still a substantial cost to becoming an Ethereum validator - the minimum stake is 32 ether.</p><h2 id="h-a-green-application-layer" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">A green application layer</h2><p>While Ethereum&apos;s energy consumption is very low, there is also a substantial, growing, and highly active regenerative finance (ReFi) community building on Ethereum. ReFi applications use DeFi components to build financial applications that have positive externalities benefiting the environment. ReFi is part of a wider <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Solarpunk">&quot;solarpunk&quot;</a> movement that is closely aligned with Ethereum and aims to couple technological advancement and environmental stewardship. The decentralized, permissionless, and composable nature of Ethereum makes it the ideal base layer for the ReFi and solarpunk communities.</p><p>Web3 native public goods funding platforms such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/">Gitcoin</a> run climate rounds to stimulate environmentally conscious building on Ethereum&apos;s application layer. Through the development of these initiatives (and others, e.g. 
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/desci/">DeSci</a>), Ethereum (not exclusively, there are also similar apps on other chains) is becoming the base layer for a range of potentially environmentally and socially net-positive applications.</p><p>There have also been several suggestions that blockchain backends can enable <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mattereum.com/circular-economy/">circular, less wasteful economies</a> using, for example, NFTs and global markets to connect buyers and sellers, or by preventing <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.edf.org/sites/default/files/documents/double-counting-handbook.pdf">double-spending of carbon offsets</a>.</p><p>It remains to be seen whether these nascent applications can generate real-world positive externalities in the way they intend.</p><h2 id="h-why-did-the-merge-take-so-long" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Why did the merge take so long?</h2><p>Ethereum has aspired to be a PoS chain since its inception. However, implementing proof-of-stake turned out to be a major technical challenge, because the Ethereum developer community wanted to ship it without compromising on security, scalability or decentralization.</p><p>Initially, the development roadmap was <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://consensys.net/blog/blockchain-explained/the-roadmap-to-serenity-2/">divided into phases</a>, with scaling by sharding the blockchain happening before the merge to PoS. 
However, the rapid development of layer 2 “rollups” allowed Ethereum to scale sufficiently that the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698">merge was brought forward</a>, ahead of sharding.</p><p>New software clients had to be developed from the ground up to deal with the new security and consensus mechanisms. The community had to buy into the idea by staking their ether in a deposit contract, even without a fixed date for withdrawals being enabled (i.e. staked ETH was locked until some unknown point in the future). Then, the new client implementations had to be rigorously tested. This process took several years.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>While Ethereum&apos;s energy consumption has historically been substantial, it transitioned from energy-hungry to energy-efficient block validation in September 2022. To quote <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://podcast.banklesshq.com/">Bankless</a>, the best way to reduce the energy consumed by proof-of-work is simply to &quot;turn it off&quot;. Ethereum did that in September 2022, reducing its energy consumption by &gt;99.98%.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/b1ca9b520dc4919d2719419b3a0289037568ab798359946b17c34f2075dee10d.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[DeSci: the case for decentralized science]]></title>
            <link>https://paragraph.com/@jmcook/desci-the-case-for-decentralized-science</link>
            <guid>jdc2s7wGkjHBzpMzBF4m</guid>
            <pubDate>Wed, 09 Feb 2022 12:07:20 GMT</pubDate>
            <description><![CDATA[This article was one of 7 winners of the Gitcoin Public Goods RFP October 21: https://gitcoin.co/blog/seeking-a-new-kind-of-public-good-closing-the-loop/ Science is the process of discovery. It powers technological advancement and enables us to navigate and manipulate our environment. Strong economic, ethical and pragmatic arguments can be made in favour of scientific knowledge being a public good. This is especially true now that damaging disinformation is especially rife yet novel discoveri...]]></description>
            <content:encoded><![CDATA[<p><em>This article was one of 7 winners of the Gitcoin Public Goods RFP October 21: </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gitcoin.co/blog/seeking-a-new-kind-of-public-good-closing-the-loop/"><em>https://gitcoin.co/blog/seeking-a-new-kind-of-public-good-closing-the-loop/</em></a></p><p>Science is the process of discovery. It powers technological advancement and enables us to navigate and manipulate our environment. Strong economic, ethical and pragmatic arguments can be made in favour of scientific knowledge being a public good. This is especially true now that damaging disinformation is especially rife yet novel discoveries will be needed to fight existential-level threats like pandemics and climate change. However, this knowledge comes from scientific research, and our current infrastructure for doing that is broken, from the initial allocation of funds right through to the eventual dissemination of results. The reasons for this are diverse but ultimately share a common thread: they are emergent phenomena of centralized control. In future, a stack of DeSci dapps could offer a more attractive model for altruistically driving, doing and disseminating scientific research. This essay will make the case for DeSci and suggest a potential roadmap for building out a DeSci stack.</p><h2 id="h-the-broken-tradsci-model" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The broken TradSci model</h2><p>Science is enabled by distribution of funds to individuals or groups who propose to complete some specific project. In almost all cases, written applications are scored by a small panel of individuals who might then interview shortlisted candidates prior to awarding funds to a successful few. 
This general model has a long history, but is also well-known to be vulnerable to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://journals.asm.org/doi/full/10.1128/mBio.00422-16">biases, politics and self-interest of the review panel</a>. Studies have found <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://elifesciences.org/articles/13323">no correlation between grant application scores and their eventual outcomes</a>, indicating that review panels do a poor job of selecting high-quality projects. The same proposals given to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://science.sciencemag.org/content/214/4523/881?ijkey=e1dc5b0bdf2dd9bda0dd16d00e39e2cc9c21a84e&amp;keytype2=tf_ipsecsha">different panels</a> receive <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.pnas.or/">wildly different outcomes</a>, without even agreement on the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.pnas.org/content/115/12/2952">relative merits of the proposals</a>. These issues have been amplified as research funding has become more scarce over time, entrenching a “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process#1">funding crisis</a>”. 
Funders have increasingly <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.freethink.com/science/fixing-the-way-we-fund-science">favored “safe hands”, hindering the progression of new researchers</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process#1">stifling intellectually ambitious projects</a>. The effect has been to circulate money around an established pool of academics, while also creating a hyper-competitive funding landscape that incentivizes applicants to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://academic.oup.com/gigascience/article/8/6/giz053/5506490?login=true">over-promise and under-deliver</a>. There have been calls to replace the current system with, for example, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vox.com/future-perfect/2019/1/18/18183939/science-funding-grant-lotteries-research">grant lotteries</a>. Overall, the current centralized funding model is deeply inefficient, entrenches perverse incentives and undermines the scientific progress it is supposed to promote.</p><p>Science publishing is <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.forbes.com/sites/madhukarpai/2020/11/30/how-prestige-journals-remain-elite-exclusive-and-exclusionary/">also famously problematic</a> in that it relies upon free labour from authors, reviewers and editors, and then extracts extortionate fees from authors to cover publication costs. The resulting article is then usually hidden behind paywalls so that readers pay to access knowledge that – in the case of nationally funded work – they have already paid for through taxation. 
Alternatively, authors stump up an inflated “open access fee” to make the article available to the public. This creates a two-tier science publishing system – those that can afford to publish open-access in high impact journals (raising their chances of future grant capture and employment) and those that can’t (diminishing their chances of funding and promotion). While free and open-access platforms do exist in the form of pre-print servers (e.g. <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/">arXiv</a>), these platforms lack quality control mechanisms and do not generally track article-level metrics, meaning they are usually used only to publicize work prior to submission to a traditional publisher. SciHub also exists to make published papers free to access, though not legally, and only after the publishers have already taken their payment and wrapped the work in strict copyright legislation.</p><p>In order to publish in reputable science journals, articles undergo peer-review. The theory of peer review is that experts in an appropriate discipline examine the work to determine whether it meets the necessary standards to be released to the wider community. At its best, peer review is a constructive process of incremental improvements to a manuscript that strengthens the underlying science. However, all too often it is a heavily politicized process of gatekeeping that favours more powerful players. Peer review often entrenches dogma and is vulnerable to straightforward pettiness (search #reviewer2 on Twitter). 
“Open Review” has also been attempted many times, but it is wide open to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blogs.lse.ac.uk/impactofsocialsciences/2014/11/07/controversy-of-post-publication-peer-review/">abuse from trolls</a>.</p><p>Ultimately these structural problems with science funding and publishing have arisen from centralized control – a small pool of power-players sculpt the scientific landscape and have created a deeply imbalanced industry from what should be a public good.</p><h2 id="h-desci-a-new-vision-for-science" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">DeSci: A new vision for science</h2><p>A decentralised model could be used to rewrite the rules of professional science. “DeSci” could allow communities to decide how to distribute funds, enabling funding of long-term, ambitious and intellectually adventurous projects, establishing direct connections between donors and researchers, and even retroactively funding or rewarding researchers for impactful discoveries, inventions and products that have proven to be valuable public goods. Part of the reason why “<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.theatlantic.com/technology/archive/2011/04/quote-ad-generation/349689/">the best minds of [a] generation are thinking about how to make people click ads</a>” is surely that they are properly incentivized to do so – DeSci offers the potential to radically alter how a new generation of great minds are incentivized to tackle major scientific and technological challenges. At the same time, science articles and associated data can be made truly open and accessible, free or almost-free to publish, and subject to a completely open and perpetual community review process that dynamically updates the article’s, and by extension the author’s, credentials. 
To deliver this vision, developers will need a stack of DeSci dapps that connect a user’s scientific activity to their credibility and their unique value-proposition for funders. Such a system could generate a complex and interesting game theory that could stimulate the development of a healthier science infrastructure.</p><p>DeSci will necessarily be defined from the ground up, as multiple dapps with specific value-propositions emerge and co-evolve. However, there are some key features that seem likely to emerge, and some initial ideas that could steer the early development. For example, these seem like sensible a priori development guidelines:</p><ol><li><p>Grants are flexible, with no floor or ceiling on monetary value, end-dates or number of grants awarded, with terms that can be updated and options for retroactive funding etc.</p></li><li><p>Articles are free or nearly-free to publish and then universally free to access.</p></li><li><p>Peer-review of grants and articles is a perpetual community activity.</p></li><li><p>An individual’s grant, publication and reviewing credentials are quantified and stored on chain, and likely used to weight their individual contributions to votes and peer review.</p></li></ol><p>While DeSci dapps would likely be viewed with some unease in parts of the scientific community to begin with – especially within my own field of environmental science, where the energy consumption associated with PoW is justifiably likely to be a major barrier to adoption until “the merge” – there is also widespread discontent with the current system and an appetite for structural change, which suggests a robust DeSci stack could become a popular way to fund and publish new research in the future.</p><h2 id="h-implementation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Implementation</h2><p>An example of a DeSci system that conforms to these ideals might begin by replacing the primary currency of research scientists – published 
articles – with NFTs. These NFTs can be minted and owned by the authors, with the actual manuscript and associated code and datasets hosted on <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ipfs.io/">IPFS</a> (or another decentralised storage such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.arweave.org/">Arweave</a> or <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://filecoin.io/">Filecoin</a>). This NFT is then citable by other researchers when they mint their own NFTs, with the number of citations for each article tracked in the NFT smart contract. Other users can then decide to review the work. They give the paper a score which is aggregated with prior review scores to generate a “trust score” property of the NFT, which is publicly viewable. The aggregated trust score from all of a user’s NFTs, along with information about successful delivery of grants and other relevant activities can then be used to define their personal score. This score can then be used to weight the user’s reviews on other articles and grant applications. Distribution of funds can then occur by DAO-style voting, with additional weight given to voters with higher scores. Further criteria can also be established using thematic tags on the NFTs.</p><p>Meanwhile, science funding might well take inspiration from existing grant-awarding DAOs such as Uniswap and Gitcoin. Weighting of an individual’s reviewing power might follow, for example, a quadratic voting model, where their impact is determined by the square root of their total personal score. Funds could come from onboarding traditional funders into crypto (e.g. 
by allowing deposits of stablecoins into a contract which are then distributed to researchers) or by the DeSci platform generating funds independently, for example by pooling small publication fees into a community escrow which can be parked in a DeFi protocol to generate yield. The former case also relieves traditional funders of the burden of selecting successful candidates as this responsibility is devolved to the community, although they could impose some stipulations such as specific fields of interest etc.</p><h2 id="h-scifoundry" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">SciFoundry</h2><p>Some very early <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.github.com/jmcook1186/DeSci">“proof-of-concept” development </a>of a DeSci publishing platform called SciFoundry is underway. So far, SciFoundry is an ERC721 smart contract hosted on the Rinkeby testnet. Users can mint their articles and receive an NFT which can be viewed <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://testnets.opensea.io/assets/0xE77D0f2b83c558DDb10eD98fF100615Cca2FaF3d/0/">on Opensea</a>. Anyone can download the full article and its assets using the external URLs in the NFT metadata, or view the article’s citation and trust score metrics via the Opensea listing. When other researchers mint their own articles they provide a list of token IDs in place of a reference list. This is used by the contract to increment the citation counter of each cited article, updating their stats. Reviewers can also provide numeric scores (/100) and links to text comments by calling a simple function of the SciFoundry contract. The arithmetic mean of all the submitted review scores becomes the NFT’s “trust score” attribute. The project README has more details on the current functionality. 
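The two scoring mechanics described in this section – arithmetic-mean trust scores and square-root (quadratic) weighting of reviewer influence – are simple enough to prototype off-chain. This Python sketch is purely illustrative, not SciFoundry's actual contract logic:

```python
import math

def trust_score(review_scores: list[float]) -> float:
    """Arithmetic mean of submitted review scores (each out of 100)."""
    return sum(review_scores) / len(review_scores)

def reviewer_weight(personal_score: float) -> float:
    """Quadratic-voting style influence: impact grows with the
    square root of a user's total personal score."""
    return math.sqrt(personal_score)

print(trust_score([80, 90, 70]))  # 80.0
print(reviewer_weight(64))        # 8.0
```

Note how the square root dampens the influence of very high scorers: a reviewer with four times the personal score of another only gets twice the voting weight.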
This is just the most minimal possible demonstration but it hints at real potential for building out a DeSci stack.</p><h2 id="h-the-desci-community" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The DeSci Community</h2><p>Since first writing this article I have seen DeSci find a foothold in the wider Web3 world and discovered several flourishing communities, especially around open, decentralised data, patents and decentralised research funding, particularly in the medical and biotech fields. Many of these have formed DAOs and some have launched tokens. Organizations making waves in this space include Opscientia, OceanDAO, GenomesDAO, Radicle and labDAO, among others.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>This article has described the current dysfunctional centralized science infrastructure and provided a loose outline for how DeSci could begin to emerge from the shoulders of DeFi and NFTs. A DeSci stack could be a powerful tool for establishing a fairer and more efficient science landscape that deprioritizes the enrichment of middlemen and reprioritizes the distribution of knowledge as a fundamental public good.</p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
        <item>
            <title><![CDATA[Client diversity on Ethereum's consensus layer]]></title>
            <link>https://paragraph.com/@jmcook/client-diversity-on-ethereum-s-consensus-layer</link>
            <guid>Rf18zUfEr64X17XGiDTa</guid>
            <pubDate>Tue, 08 Feb 2022 17:04:25 GMT</pubDate>
            <description><![CDATA[This is a detailed accompaniment to an introductory article I wrote at ethereum.org Ethereum has multiple interoperable clients developed and maintained in different languages by independent teams. This is a major achievement and can provide resilience to the network by limiting the effects of a bug or attack to only the portion of the network running the affected client. However, this strength is only realized if users distribute roughly evenly across the available clients. At present, the v...]]></description>
            <content:encoded><![CDATA[<p><em>This is a detailed accompaniment to an introductory article I wrote at </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/nodes-and-clients/client-diversity/"><em>ethereum.org</em></a></p><p>Ethereum has multiple interoperable clients developed and maintained in different languages by independent teams. This is a major achievement and can provide resilience to the network by limiting the effects of a bug or attack to only the portion of the network running the affected client. However, this strength is only realized if users distribute roughly evenly across the available clients. At present, the vast majority of Ethereum nodes run a single client, inviting unnecessary risk to the network.</p><p>Ethereum will soon undergo one of the most significant upgrades to its architecture since its inception - the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/upgrades/merge/#">merge</a> from proof-of-work (PoW) to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#pos">proof-of-stake (PoS)</a>. This will fundamentally change the way the network comes to consensus about the true state of the blockchain and network security is maintained. This new architecture brings security, scalability and sustainability benefits, but at the same time amplifies the risks associated with single-client dominance. This article will explore why…</p><h2 id="h-the-beacon-chain" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">The Beacon Chain</h2><p>The Beacon chain is a proof-of-stake (PoS) blockchain. 
It currently runs in parallel to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#mainnet">Ethereum mainnet</a> but the two will soon be <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/upgrades/merge/">&quot;merged&quot;</a> together. The existing mainnet clients (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#execution-client">&quot;execution clients&quot;</a>) will continue to host the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#evm">Ethereum Virtual Machine (EVM)</a> and validate and broadcast transactions but will stop participating in proof-of-work (PoW) mining and relinquish responsibility for coming to consensus on the head of the blockchain. Instead, consensus will become the responsibility of <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#consensus-client">“consensus clients”</a> that bundle transactions from execution clients together with information required for consensus into “Beacon Blocks” which then form the Beacon Chain. Miners will be replaced by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/glossary/#validator">&quot;validators&quot;</a> who deposit ether into an Ethereum smart contract (&quot;staking&quot;). This ether acts as collateral incentivizing good behavior. Inactivity or malicious behavior results in the burning of some portion of that staked ether. 
On the other hand, if a validator behaves appropriately, they are rewarded with ether payouts.</p><h3 id="h-validator-duties" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Validator Duties</h3><p>Good behavior for a validator means participating in validating Beacon Blocks that they receive from peers and voting on their view of the head of the chain. If the blocks they receive are valid, the validator &quot;attests&quot; to them, effectively voting for them to be added to the blockchain. Occasionally, a validator will be required to propose a new block, which other validators can attest to. Where there are multiple forks of the blockchain, the one with the greatest accumulation of attestations over its history is identified as the correct one.</p><p>Occasionally, a validator will participate in a sync committee. A sync committee is a group of 512 randomly chosen validators that sign block headers so that light clients can retrieve validated blocks without having to access the full historical chain or the full validator set.</p><h3 id="h-justification-and-finality" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Justification and Finality</h3><p>The Beacon chain <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethos.dev/beacon-chain/">sets the rhythm</a> for the network. This rhythm is organized into two units of time: slots and epochs. Slots are opportunities for blocks to be added to the beacon chain and they occur once every 12 seconds. Slots can go unfilled, but when the system is running optimally, blocks are added in every available slot. Epochs are units of 32 slots.</p><p>In each epoch, the block in the first slot is a checkpoint. These checkpoints are important because they are used to make sections of the blockchain permanent and irreversible. This is a two-stage process. 
First, if at least 2/3 of the total staked ether balance of all the active validators (&quot;supermajority&quot;) attest to the most recent pair of checkpoints (the current “target” and previous “source” checkpoints) then that section of the chain is “justified”. Justification is the first step towards permanent inclusion on the canonical blockchain. Once a justified checkpoint has another checkpoint justified on top of it, it is &quot;finalized&quot;, making it permanent and irreversible.</p><p>This process of justification and finalization means that validator attestations are actually a little more complex than suggested above. There are two types of attestation. One is the LMD GHOST vote, which attests to the head of the chain (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2003.03052.pdf">LMD GHOST is the fork choice algorithm</a>). The second is an FFG vote that attests to pairs of checkpoints (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/1710.09437.pdf">FFG is the name of the &quot;finality gadget&quot;</a> that justifies and finalizes the chain). All validators make FFG votes for each checkpoint, while a randomly chosen subset makes LMD-GHOST votes in each slot.</p><h3 id="h-staking-rewards-penalties-and-slashing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Staking rewards, penalties and slashing</h3><h4 id="h-rewards" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Rewards</h4><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://consensys.net/blog/codefi/rewards-and-penalties-on-ethereum-20-phase-0/#">Staked ether acts as collateral</a> incentivizing honest behavior of validators. 
This staked ether grows over time as validators are <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/rewards">rewarded</a> for their participation in securing the network. Validators receive attestation rewards when they make LMD-GHOST and FFG votes consistent with the majority of other validators. When validators are selected to be block proposers they get rewarded if their proposed block gets finalized. Block proposers can also increase their reward by including evidence of misbehavior by other validators in their proposed block. These rewards are the &quot;carrots&quot; that encourage validator honesty.</p><h4 id="h-penalties" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Penalties</h4><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/penalties">The &quot;sticks&quot;</a> take the form of various mechanisms for burning a small portion of a validator&apos;s staked ether. Attestation penalties are applied when a validator fails to submit an FFG vote, submits it late, or submits an incorrect vote. There is no penalty for missing LMD-GHOST votes except for the opportunity cost of missing the head vote reward. The validator balance is reduced by the same amount as they would be rewarded for a correct attestation. This means an honest but &quot;lazy&quot; validator that is maximally penalized for missed attestations loses 3/4 of the amount they would gain if they attested perfectly. When validators are assigned to sync committees they receive rewards for each slot they sign off. 
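</p><p>Returning to attestation penalties for a moment: the &quot;3/4&quot; figure above can be reproduced from the Altair incentive weights. The specific weight values here are an assumption drawn from the consensus spec rather than stated in this article; what matters is that missed attestations are penalized on the source and target components but not on the head component.</p>

```python
# Altair attestation weights (assumed values from the consensus spec;
# the article only quotes the resulting ~3/4 ratio).
TIMELY_SOURCE_WEIGHT = 14
TIMELY_TARGET_WEIGHT = 26
TIMELY_HEAD_WEIGHT = 14

# A perfect attestation earns all three components...
perfect_reward = TIMELY_SOURCE_WEIGHT + TIMELY_TARGET_WEIGHT + TIMELY_HEAD_WEIGHT

# ...while a missed attestation is penalized on source and target only
# (a missed head vote costs nothing beyond the lost reward).
missed_penalty = TIMELY_SOURCE_WEIGHT + TIMELY_TARGET_WEIGHT

# 40/54, i.e. roughly 3/4 of the perfect-attestation reward.
lazy_loss_ratio = missed_penalty / perfect_reward
```

<p>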
When validators in a sync committee fail to sign blocks they are penalized exactly the value of ether they would have received for signing successfully.</p><p>Overall these penalties are mild and amount to a very slow bleed of staked ether for continued inactivity.</p><h3 id="h-slashing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Slashing</h3><p>Slashing is a more severe action that results in the forceful removal of a validator from the network and an associated loss of their staked ether. There are <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://consensys.net/blog/codefi/rewards-and-penalties-on-ethereum-20-phase-0/">three ways</a> a validator can be slashed, all of which amount to the dishonest proposal or attestation of blocks:</p><ol><li><p>By proposing and signing two different blocks for the same slot</p></li><li><p>By attesting to a block that &quot;surrounds&quot; another one (effectively changing history)</p></li><li><p>By &quot;double voting&quot;: attesting to two candidates for the same block</p></li></ol><p>If these actions are detected, the validator is slashed. This means that 1/64th of their staked ether (up to a maximum of 0.5 ether) is immediately burned, then a 36-day removal period begins. During this removal period the validator&apos;s stake gradually bleeds away. At the midpoint (day 18) an additional penalty is applied whose magnitude scales with the total staked ether of all slashed validators in the 36 days prior to the slashing event. This means that when more validators are slashed, the magnitude of the slash increases. The maximum slash is the full effective balance of all slashed validators (i.e. if there are lots of validators being slashed they could lose their entire stake). On the other hand, a single, isolated slashing event only burns a small portion of the validator&apos;s stake.
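</p><p>A sketch of the two slashing penalties just described: the immediate 1/64 burn, and the midpoint correlation penalty that scales with the total stake slashed in the surrounding window. The proportional multiplier of 2 is the Altair-era value and is an assumption not stated in the article.</p>

```python
def initial_slash(effective_balance: float) -> float:
    """Immediate burn at slashing: 1/64 of the effective balance
    (0.5 ether for a full 32-ether stake)."""
    return effective_balance / 64

def midpoint_penalty(effective_balance: float,
                     total_slashed: float,
                     total_stake: float,
                     multiplier: int = 2) -> float:
    """Correlation penalty applied on day 18 of the removal period.
    It scales with recently slashed stake and is capped at the
    validator's full effective balance. multiplier=2 is an assumed
    Altair constant, not stated in the article."""
    fraction = min(total_slashed * multiplier, total_stake) / total_stake
    return effective_balance * fraction
```

<p>An isolated slashing burns almost nothing at the midpoint, whereas a mass-slashing event involving half of all staked ether wipes out the full balance.</p><p>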
This midpoint penalty that scales with the number of slashed validators is called the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/slashing#the-correlation-penalty">&quot;correlation penalty&quot;</a>.</p><h3 id="h-inactivity-leak" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Inactivity Leak</h3><p>If the Beacon Chain has gone more than four epochs without finalizing, an emergency protocol called the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/inactivity#inactivity-leak">&quot;inactivity leak&quot;</a> is activated. The ultimate aim of the inactivity leak is to create the conditions required for the chain to recover finality. As explained above, finality requires a 2/3 majority of the total staked ether to agree on source and target checkpoints. If validators representing more than 1/3 of the total staked ether go offline or fail to submit correct attestations then it is not possible for a 2/3 supermajority to finalize checkpoints. The inactivity leak lets the stake belonging to the inactive validators gradually bleed away until they control less than 1/3 of the total stake, allowing the remaining active validators to finalize the chain. However large the pool of inactive validators, the remaining active validators will eventually control &gt;2/3 of the stake. The loss of stake is a strong incentive for inactive validators to reactivate as soon as possible!</p><p>The reward, penalty and slashing design of the Beacon Chain encourages individual validators to behave correctly. However, from these design choices emerges a system that strongly incentivizes equal distribution of validators across multiple clients, and strongly disincentivizes single-client dominance. This arises because the supermajority is so fundamental to Beacon Chain consensus.
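</p><p>The inactivity leak&apos;s recovery dynamic described above can be illustrated with a toy simulation. The constant 1% bleed per epoch is purely illustrative; the protocol&apos;s actual leak grows over time rather than staying constant.</p>

```python
def epochs_until_finality(active_stake: float, inactive_stake: float,
                          bleed_rate: float = 0.01) -> int:
    """Bleed the inactive validators' stake each epoch until the
    active validators control at least 2/3 of the total, restoring
    the ability to finalize. bleed_rate=0.01 is an illustrative
    assumption, not the protocol's actual schedule."""
    epochs = 0
    while active_stake / (active_stake + inactive_stake) < 2 / 3:
        inactive_stake *= 1 - bleed_rate  # inactive stake leaks away
        epochs += 1
    return epochs
```

<p>However large the inactive pool starts, the loop terminates: the inactive share only shrinks, so the active validators eventually control the required supermajority.</p><p>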
A single bad validator is fairly benign, but a large group of bad validators can wreak havoc. Let’s examine some potential scenarios…</p><h2 id="h-client-diversity-risk-scenarios" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Client Diversity Risk Scenarios</h2><p>The force incentivizing consensus client diversity is risk. With even distribution of validators across multiple clients the consequences of attacks or bugs that exploit specific clients are drastically reduced, whereas single-client dominance acts as a risk multiplier. This risk multiplication effect scales with the network share of the dominant client. We can get more intuition for this by examining some hypothetical (but realistic) scenarios. Let&apos;s assume a bug is accidentally introduced into a consensus client. This bug can either directly lead to incorrect attestations, or expose a vulnerability that allows a malicious attacker to force a client to make incorrect attestations. How does client diversity influence the consequences of such a bug?</p><h3 id="h-scenario-1-corrupted-client-has-less-than-13-total-staked-ether" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Scenario 1: corrupted client has less than 1/3 total staked ether</h3><p>This scenario confers maximum resilience on the Beacon Chain because at least 2/3 of the total staked ether is still making correct attestations, allowing the Beacon Chain to finalize as normal. Therefore, from the network perspective the consequences are negligible. The affected validators suffer attestation penalties because they submit incorrect attestations. These penalties are relatively minor and the affected validators can simply wait for the client to be fixed or switch to an alternative client.
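</p><p>The scenarios in this section are organized by the corrupted client&apos;s share of the total staked ether. A minimal classifier over those thresholds (the exact boundary between the halted-finality and fork cases is fuzzier in practice; the cutoffs here simply follow the section headings):</p>

```python
from fractions import Fraction

def risk_scenario(client_share: Fraction) -> str:
    """Classify a corrupted client's stake share using the threshold
    fractions from the scenario headings. Real-world boundaries,
    especially around 1/2, are fuzzier than this."""
    if client_share < Fraction(1, 3):
        return "scenario 1: chain finalizes normally"
    if client_share < Fraction(1, 2):
        return "scenario 2: finality halts, inactivity leak activates"
    if client_share < Fraction(2, 3):
        return "scenario 3: risk of an unrecoverable fork"
    return "scenario 4: corrupted chain can finalize on its own"
```

<p>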
Either way, the validator can resume making correct attestations with minimal financial consequences and no disruption to the Beacon Chain.</p><h3 id="h-scenario-2-corrupted-client-has-greater-13-total-staked-ether" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Scenario 2: corrupted client has &gt; 1/3 total staked ether</h3><p>This scenario is far more problematic because less than 2/3 of the total staked ether is making correct attestations - there can be no supermajority. This means the Beacon Chain cannot finalize and the inactivity leak will be activated. The bug now has consequences for the network as a whole. Finality is critical for exchanges and apps built on top of Ethereum - without it there is no guarantee that transactions are permanent and irreversible. For individual validators using the affected client, the penalties are much more severe because of the inactivity leak - their stake is burned until the affected client controls &lt; 1/3 of the total staked ether. Only then can the Beacon Chain start finalizing again. The ether burn <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/inactivity#inactivity-penalties">can actually continue</a> for some time after the Beacon Chain recovers, providing a buffer against small changes in validator numbers flipping the state of the Beacon Chain from &quot;able to finalize&quot; to &quot;unable to finalize&quot;. Only when an affected client has more than 1/3 share of the total staked ether is Beacon Chain finalization in jeopardy.</p><p>Validators running correctly-functioning alternative clients receive no rewards during the inactivity leak. This is a security mechanism to prevent attackers from deliberately initiating the inactivity leak in order to raise the total rewards available to their correctly-operating validators. 
These are small penalties, but the point is that no-one escapes negative consequences from a consensus bug in a client with more than 1/3 of the total staked ether.</p><h3 id="h-scenario-3-corrupted-client-has-12-total-staked-ether" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Scenario 3: corrupted client has 1/2 total staked ether</h3><p>This scenario potentially leads to an unrecoverable fork in the Beacon Chain. If the client with a consensus bug forks off onto its own chain, neither the original nor the new fork would be able to finalize because both would be missing about half of their validators and would both activate the inactivity leak. The staked ether of the missing validators on each chain would burn until it amounted to &lt; 1/3 of the total staked ether, at which point some validators on each chain could start finalizing again. This would take about the same time on both forks because the amount of ether burning required to restore finalization would be about equal. Both forks would finalize independently with a different set of finalized checkpoints. The two forks could never be merged together into a single canonical chain. To remedy this would require social consensus from the Ethereum community about which is the canonical chain - a process sure to be politically awkward and divisive and leading to financial losses for about half the community as they switch chains (not including the likely devaluing of ether that could result from the market pricing in the drama). Perhaps worse, the community could simply stay divided (with similarities to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gemini.com/cryptopedia/the-dao-hack-makerdao">DAO fork that produced Ethereum Classic</a>).</p><p>To avoid a permanent split in the Beacon Chain, validators using the corrupted client would have to race the inactivity leak to switch or fix their client before the chain starts finalizing. 
There would probably be <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/diversity#3-client-x-has-around-half-of-the-stake">3-4 weeks available</a>, during which time developers would be rushing to save Ethereum. There is no escaping significant financial consequences for a large set of validators in this scenario.</p><h3 id="h-scenario-4-corrupted-client-has-greater-23-total-staked-ether" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Scenario 4: corrupted client has &gt; 2/3 total staked ether</h3><p>This is the nightmare scenario for the Beacon Chain because the corrupted client has a supermajority and is able to finalize its own chain. Incorrect information would then likely be cemented into Ethereum&apos;s history forever. There would be only about <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/diversity#4-client-x-approaches-or-exceeds-two-thirds-of-the-stake">13 minutes</a> for client teams to identify the bug, fix it and broadcast updates to the affected validators before the chain begins to finalize corrupted blocks.</p><p>The only viable mitigation to this situation is for the affected validators to withdraw their stakes and exit the chain. If the affected validators try to rejoin the correct chain after applying a fix they would be slashed with the maximum correlation penalty because they would now be attesting to checkpoints that contradict their previous attestations, and doing so <em>en masse</em>. The inactivity leak would be activated by the large validator exodus, meaning the affected validators would be continuously losing their staked ether while they are waiting in the exit queue.
The large number of validators would make the queue long, slow and expensive.</p><p>The only other option is for the remaining non-affected clients to accept the bug, join the new chain and agree that the bug becomes the expected behavior of Ethereum&apos;s consensus layer from then on. This would run contrary to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.symphonious.net/2021/02/13/hard-truths-for-eth-stakers/">core principles</a> of the staking community and would be <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.symphonious.net/2021/09/23/what-happens-if-beacon-chain-consensus-fails/">extremely divisive</a>. Those minority clients would then be subject to inactivity penalties on the new chain even though they acted properly. Neither of these are good options. The former option is extremely expensive for the affected validators and logistically awkward to correct. The latter option would deeply undermine trust in Ethereum and cause us to accept a permanently tarnished chain.</p><h3 id="h-other-risks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Other risks</h3><h4 id="h-reverting-finality" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Reverting Finality</h4><p>Control of &gt;2/3 of the total staked ether gives power to the developers of a single client to choose which version of history is the right one. For example, if the developers turned malevolent they could spend some ether (cash it out via an exchange or bridge to another chain, for example) then collectively vote to replace the existing finalized chain with an alternative version that does not include their spend transaction. 
This is a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://en.wikipedia.org/wiki/Double-spending">&quot;double-spend&quot;</a> made possible by the client&apos;s supermajority that allows it to revert finality and overwrite history. Meanwhile the honest minority would be punished for their inconsistent attestations. A malevolent supermajority could also just threaten such action and hold the network to ransom. Even with just 1/3 of the stake a malevolent team could threaten to halt finality and activate the inactivity leak.</p><h4 id="h-shared-responsibility" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Shared responsibility</h4><p>The previous point took a somewhat pessimistic view of the client development teams, not because it was justified but because nefarious behavior is <em>possible</em> and therefore needs defending against. However, those same developers are most likely to always be good actors and they themselves require protection against single-client dominance, not only because they are likely to be Ethereum users (and ether holders/stakers) but because responsibility for the security of the network should not be concentrated on the shoulders of one small team. There is a real cost in the form of stress and mental health for developers whose actions have outsized consequences for the health of Ethereum as a whole. Client diversity protects against this by sharing the responsibility across multiple independent teams.</p><h4 id="h-centralization" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Centralization</h4><p>Even when the development team comprises entirely well-intentioned actors, they still retain excessive power over the functioning of Ethereum when they control the majority of the staked ether. Decentralization is a core principle for Ethereum, and this must include developers as well as users and custodians.
Decentralization of development teams across multiple clients that share equal proportions of the staked ether limits the power of a single team to make key decisions about, for example, the content and timing of forks, reducing their influence on the philosophical direction of Ethereum. Client diversity protects decentralized decision making at the developer level.</p><h4 id="h-politics" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0">Politics</h4><p>Social recovery of an honest chain is an issue fraught with politics. Ethereum&apos;s consensus mechanism should finalize based on the rules coded into its clients - that is its primary aim. Intervening in that process is likely to lead to schisms in the Ethereum community where various forks benefit or punish different pockets of the community, and different users will likely have different points of view about the most philosophically, ethically and technically acceptable mitigation of a consensus bug/attack in a majority client. Governance decisions would be awkward, disruptive and likely too slow to be maximally effective.</p><h2 id="h-real-world-examples" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Real world examples</h2><p>The scenarios outlined above have relatively low probability of occurrence. Developers are meticulous in researching and testing each and every update to their software and there is no reason to doubt the integrity of any client teams - far from it. However, neither are the scenarios purely hypothetical. There have already been examples of client diversity rescuing Ethereum&apos;s mainnet from permanent damage, and examples of consensus bugs disrupting Ethereum testnets.
Some of these examples are described below.</p><h3 id="h-shanghai" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Shanghai</h3><p>In September 2016, during the Shanghai DevCon conference, hackers were able to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.youtube.com/watch?v=nhr5nlMNvRQ">attack Ethereum</a>, exploiting several vulnerabilities in the client software causing the network to slow down dramatically. The attacker was persistent, rapidly deploying new similar attacks as client developers raced to reverse-engineer and patch them. Eventually the attacker found a vulnerability in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geth.ethereum.org/">Geth</a> that could not be patched, necessitating a hard fork. Even after the hard fork, the attacker still found a denial-of-service vulnerability that used the bloated state hanging over from their previous attacks to force clients to make tens of thousands of slow disk i/o operations in each block. Client diversity won the day because, while developers fought to fix the vulnerabilities in Geth, Ethereum was able to continue using the alternative Parity clients, which did not suffer the same vulnerability.</p><p>The Shanghai attack was recoverable because there were multiple clients, but the situation could have been very different had a similar bug affected a majority consensus client. If a consensus client had the same dominance that Geth had at the time of the attack, the chain would not have been able to finalize as the vast majority of the validators would not have been attesting to blocks. 
The inactivity leak would have been activated because &lt; 1/3 of the total staked ether was available for attesting.</p><h3 id="h-insecura" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Insecura</h3><p>The viability of a &quot;long range attack&quot; was recently <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@benjaminion/wnie2_220128">demonstrated on the Pyrmont testnet</a>. The idea was to establish a set of validators attesting to an alternate blockchain history. These validators were then used to trick new validators into joining the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethresear.ch/t/insecura-my-consensus-for-the-pyrmont-network/11833">dishonest “Insecura” chain</a>, gradually growing the population of compromised validators, eventually to the point of interrupting finality, activating the inactivity leak and draining the stake of the honest majority. Ultimately this could lead to the corrupted clients finalizing their version of the chain. 
As explained in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://twitter.com/jcksie/status/1485574681688223745?s=20&amp;t=MHfEj5MdeTJpD-ZwQvQtNw">this thread</a>, the investment of time and money required makes this an unlikely attack vector, but similar dynamics could allow a bug in a majority consensus client to infect a large proportion of the network and become finalized.</p><h3 id="h-medalla" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Medalla</h3><p>The Medalla testnet suffered a <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@benjaminion/wnie2_200822#Medalla-Meltdown-redux">sudden drop in active validators</a> due to an issue relating to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.prylabs.network/docs/how-prysm-works/beacon-node/">Prysm</a> client&apos;s clocks. The chain was not able to finalize because so many validators dropped off the network that a 2/3 majority of staked ether was no longer available for attesting. Recovery was gradual, as it relied on validators switching clients from Prysm to minority clients. Later, real time caught up with the erroneous Prysm clock time and the previously invalid attestations suddenly became valid. This caused Prysm to stall while Teku and Lighthouse clients suffered massive state bloat as they processed the sudden glut of attestations. Had Prysm been the only client, the entire network would have stalled.
Had Prysm had &lt;1/3 of the total staked ether, a lot of the chaos could have been averted.</p><h3 id="h-prysm-deposit-root-bug" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Prysm deposit root bug</h3><p>Early in 2021 Prysm <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/prysmaticlabs/prysm/issues/8298">suffered a bug</a> related to its validation of Eth1 deposit roots. Prysm clients were able to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/prysmatic-labs/eth2-mainnet-incident-retrospective-f0338814340c">generate an invalid deposit root</a> and pass it to other Prysm nodes. Because Prysm had such a large validator share, this invalid root spread quickly through the network, accelerated by the way Prysm clients followed the majority vote rather than explicitly validating the deposit root in each block. The consequences of this bug were very minor - there was no interruption to the Beacon Chain finality nor any significant financial penalties to validators. However, the incident demonstrates the importance of client diversity in two ways. First, a smaller validator share would have limited the spread of the bug through the network, reducing its impact. Second, the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/prysmatic-labs/eth2-mainnet-incident-retrospective-f0338814340c">post mortem</a> describes how alternative client implementations were used as benchmarks, helping the developers identify and fix the bug quickly. 
Of course, this would not be possible without multiple, actively maintained clients.</p><h2 id="h-client-diversity-today" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Client diversity today</h2><figure float="none" data-type="figure" class="img-center"><img src="https://storage.googleapis.com/papyrus_images/44bf46b62a77f3e9d93d7ee9716b90239ea5c435900a7df35c28f22181436ce8.jpg" alt="Pie charts of execution and consensus layer client shares" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p>The two pie charts above show snapshots of the current client diversity for the execution and consensus layers (at time of writing in January 2022). The execution layer is overwhelmingly dominated by <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geth.ethereum.org/">Geth</a>, with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openethereum.github.io/">Open Ethereum</a> a distant second, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/ledgerwatch/erigon">Erigon</a> third and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nethermind.io/">Nethermind</a> fourth, and other clients comprising less than 1% of the network. The most commonly used client on the consensus layer - <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.prylabs.network/docs/how-prysm-works/beacon-node/">Prysm</a> - is not as dominant as Geth but still represents over 60% of the network.
<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/sigp/lighthouse">Lighthouse</a> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.teku.consensys.net/en/latest/">Teku</a> make up about 20% and 14% respectively, and other clients are rarely used.</p><p>The execution layer data were obtained from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.ethernodes.org/">Ethernodes</a> on 23/01/22. Data for consensus clients was obtained from <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/sigp/blockprint">Michael Sproul</a>. Consensus client data is more difficult to obtain because the Beacon Chain clients do not always have unambiguous traces that can be used to identify them. The data was generated using a classification algorithm that sometimes confuses some of the minority clients (see here for more details). In the diagram above, these ambiguous classifications are treated with an either/or label (e.g. Nimbus/Teku). Nevertheless, it is clear that the majority of the network is running Prysm. The data is a snapshot over a fixed set of blocks (in this case Beacon blocks in slots 2048001 to 2164916) and Prysm&apos;s dominance has sometimes been higher, exceeding 68%. 
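</p><p>Plugging the approximate snapshot shares quoted above into the risk thresholds from the scenarios makes the danger concrete. The share values below are illustrative figures consistent with the text (Prysm &quot;over 60%&quot;, Lighthouse about 20%, Teku about 14%), not exact measurements:</p>

```python
# Approximate January 2022 consensus-client shares, rounded from the
# figures quoted in the text (illustrative, not exact measurements).
consensus_shares = {"Prysm": 0.62, "Lighthouse": 0.20, "Teku": 0.14}

def clients_over(shares: dict, threshold: float) -> list:
    """Return the clients whose stake share exceeds a risk threshold."""
    return sorted(name for name, share in shares.items() if share > threshold)

# Only Prysm sits above the 1/3 "finality at risk" line; no client
# currently holds the 2/3 supermajority of scenario 4.
over_one_third = clients_over(consensus_shares, 1 / 3)
over_two_thirds = clients_over(consensus_shares, 2 / 3)
```

<p>By this measure, Prysm alone is above the threshold at which a consensus bug could halt finality.</p><p>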
Despite only being snapshots, the values in the diagram provide a good general sense of the current state of client diversity.</p><p>Execution layer diversity is included here because a bug affecting the execution clients can also propagate through to the consensus layer since, after the merge, the two will be coupled together, with the execution payload generated by the execution clients becoming a core component of Beacon Blocks.</p><p>Up-to-date client diversity data for the consensus layer is now available at <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://clientdiversity.org/">https://clientdiversity.org/</a>.</p><h3 id="h-individual-stakers-and-staking-pools" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Individual stakers and staking pools</h3><p>Tackling the imbalance in client distribution requires action from the major exchanges and staking pools. However, individual stakers can still do their part by choosing to run non-Geth/Prysm client combinations. Instructions for setting up minority clients can be found at <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://clientdiversity.org">clientdiversity.org</a>.</p><p>For stakers who have less than 32 ether or who do not wish to take on responsibility for running a validator, there are staking services available. Several of the major centralized exchanges offer ether staking, but the client distribution in their staking pools is often hidden, and there are limits on the tradeability of the staked ether tokens those exchanges offer. For <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://coinmarketcap.com/alexandria/article/liquid-staking-and-its-benefits-a-deep-dive-by-lido">these (and other) reasons</a>, these centralized services are not recommended.
The better option is to use a more decentralized liquid staking service such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://lido.fi/">Lido</a> or <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://rocketpool.net/">Rocketpool</a>. These services stake ether and provide a token in return whose value increases over time as the pool’s validators accrue rewards. Those tokens can be traded or used to earn <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/defi/#what-is-defi">DeFi</a> yields. These liquid staking platforms are more transparent about their client distribution too, with Lido publishing <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://drive.google.com/file/d/1M9bOFalecnJf_pcYoxO7fWN4P1IH8PZ0/view">quarterly updates</a> and Rocketpool now <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/rocket-pool/where-we-are-and-whats-to-come-7f5f932e9035">reporting theirs</a> too. For users unable or unwilling to run their own validator, these services are a route to contributing to better client diversity.</p><h2 id="h-summary" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Summary</h2><p>Client diversity is directly incentivized by the Beacon Chain&apos;s reward and penalty protocols. Single-client dominance is a hidden threat to Ethereum, invisible while the dominant client behaves faultlessly but potentially catastrophic when a consensus bug rears its head. Having multiple clients is a unique strength of Ethereum and a testament to the diligence of the developer community. However, that good work is undermined when one client controls a majority of the stake. 
The ideal scenario is equal distribution of staked ether across at least 4 clients, giving a maximum of 1/4 of the staked ether to each client. This is easily possible with the production-ready clients available today.</p><h2 id="h-links" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">Links</h2><h3 id="h-client-diversity" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Client diversity</h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethereum.org/en/developers/docs/nodes-and-clients/client-diversity/">https://ethereum.org/en/developers/docs/nodes-and-clients/client-diversity/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://snakecharmers.ethereum.org/applying-the-five-whys-to-the-client-diversity-problem/">https://snakecharmers.ethereum.org/applying-the-five-whys-to-the-client-diversity-problem/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://upgrading-ethereum.info/altair/part2/incentives/diversity#diversity">https://upgrading-ethereum.info/altair/part2/incentives/diversity#diversity</a></p><div data-type="youtube" videoId="Bd3b7PeKEB4">
      <div class="youtube-player" data-id="Bd3b7PeKEB4" style="background-image: url('https://i.ytimg.com/vi/Bd3b7PeKEB4/hqdefault.jpg'); background-size: cover; background-position: center">
        <a href="https://www.youtube.com/watch?v=Bd3b7PeKEB4">https://www.youtube.com/watch?v=Bd3b7PeKEB4</a>
      </div></div><h3 id="h-beacon-chain" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0">Beacon Chain</h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://medium.com/chainsafe-systems/ethereum-2-0-a-complete-guide-casper-and-the-beacon-chain-be95129fc6c1">https://medium.com/chainsafe-systems/ethereum-2-0-a-complete-guide-casper-and-the-beacon-chain-be95129fc6c1</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://consensys.net/blog/codefi/rewards-and-penalties-on-ethereum-20-phase-0/">https://consensys.net/blog/codefi/rewards-and-penalties-on-ethereum-20-phase-0/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@benjaminion/wnie2_200822#Medalla-Meltdown-redux">https://hackmd.io/@benjaminion/wnie2_200822#Medalla-Meltdown-redux</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://pintail.xyz/posts/beacon-chain-validator-rewards/">https://pintail.xyz/posts/beacon-chain-validator-rewards/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.symphonious.net/2021/09/23/what-happens-if-beacon-chain-consensus-fails/">https://www.symphonious.net/2021/09/23/what-happens-if-beacon-chain-consensus-fails/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://ethos.dev/beacon-chain/">https://ethos.dev/beacon-chain/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://our.status.im/two-point-oh-justification-and-finalization/">https://our.status.im/two-point-oh-justification-and-finalization/</a></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://arxiv.org/pdf/2003.03052.pdf">https://arxiv.org/pdf/2003.03052.pdf</a></p>]]></content:encoded>
            <author>jmcook@newsletter.paragraph.com (jmc)</author>
        </item>
    </channel>
</rss>