
Reputation has always been a fascinating and controversial topic in the blockchain space. We do not mean the reputation of crypto-influencers and project founders who dump their bags on unsuspecting community members (although that is an undeniably fascinating topic), but reputation as a proper research and engineering direction. There is certainly a lot of experimentation with different conceptual and practical reputation solutions: non-transferable tokens as a form of reputation (Soulbound tokens), DAO toolboxes for the evaluation of members’ contributions (SourceCred, Coordinape), reputation to penalize censorship and other misbehaviour (relays in MEV-Boost), and many more in development. But then one might ask: what is so controversial about reputation as a research topic? It turns out, quite a lot, as we will see. And probably an even more intriguing question is why people are so persistent in trying to ‘solve reputation’ in Web3. (Yes, sure, ‘Web3’ is too ambiguous and maybe we should just call it ‘crypto’, but you get what I mean here.)
Why is reputation suddenly such a hot topic in blockchain circles? The answer, it seems, lies in the realization that tokenomics—once heralded as the revolutionary framework for decentralized networks—has its limitations. As blockchain systems have scaled and evolved, we’ve started bumping up against the hard limits of what tokens alone can achieve in terms of governance, incentive alignment, and community sustainability. This realization is what is driving the renewed focus on reputation systems as a solution. There is a growing appreciation in Web3 that token-based governance and incentive structures have, in a sense, hit a wall. Yes, we can keep adding token-based incentives to keep the flywheel going (some would say it is an ouroboros), but there are limits to what can be built on these mechanisms.
For one, decentralized governance based on freely tradable tokens inevitably oscillates between plutocracy and ochlocracy. Or, in Web3/crypto terms, the 🐋 ‘whale economy’ and the 🦐 ‘shrimp economy.’ This means we either have to contend with a few whales (large token holders) dictating their will or rely on the wisdom of a crowd of smaller token holders, who unfortunately often do not care or are too easily swayed (who remembers the Wonderland drama?).
But probably one of the biggest issues with token-based governance is the lack of long-term incentives. Token holders, motivated by financial gain, often prioritize short-term profits over the long-term health of a project. This misalignment has been starkly demonstrated in the rise and fall of algorithmic stablecoins, where speculative investors—sometimes called "mercenary capital"—swoop in, extract value, and move on without any real commitment to the ecosystem’s sustainability.
Against this background, reputation has the intuitive appeal of an alternative social mechanic that is not based on simple profit-seeking. Unlike tokens, which can be bought and sold—and thus manipulated—reputation is something that must be earned. Something that can capture the kinds of participation based on merit and fairness. Something that we associate with cooperation, peer recognition, and social capital. Reputation, in that sense, could be considered a different type of incentive mechanism. It helps to attract those who do care about the specific goals and values of a project and rewards them with social recognition. It may seem a bit utopian, but we have some good examples of open-source software projects working this way, such as early Wikipedia (sadly not anymore), early Linux, and even early Bitcoin.
Of course, the downside of this intuitive appeal is that the chase for utopian cooperation can obscure some ugly sides of reputation mechanisms. In some cases, it can just be a lazy way to deal with a problem, i.e., we have misbehaving nodes in a system, so let’s slap on identity and reputation to punish them. Treating reputation like a magic box that will solve all problems leads to hilarious and deeply disturbing results. For example, a ‘decentralized’ derivatives exchange asks users for biometric verification that will be ‘safely stored with our trusted partner.’ This one is more on the hilarious side, for sure. But then we also see ‘biometric identification hardware’ solutions that scan your eyes to prove you are a real human. Purportedly developed to fight Sybil attacks in ‘open permissionless’ systems, these veer far into the disturbing and wrong.
Speaking of which, Sybil attacks partially explain the interest in reputation mechanisms. To be fair, Sybils are a serious problem in blockchains and any other decentralized system. However, it is not always clear whether we want reputation to prevent Sybils, or whether we want to prevent Sybil attacks in order to build a reliable reputation mechanism. The answer is: it can be both. There is no single ‘Sybil attack’ but rather a general approach of creating fake participants - Sybils - in order to exploit or disrupt the system on different layers.
The problem is that, apart from locking the system down to a limited number of known participants, it is very hard to solve this problem completely. The alternative is to bind the behaviour of system participants by the rules of the protocol, e.g., using some form of proof of or accounting for ‘good’ behaviour. While we are on the topic of ‘good contributions’, it is helpful to keep in mind that the ‘good’ and ‘bad’ terminology should not be taken too seriously. First of all, this is an abstraction that refers to the question of whether participants behave in accordance with protocol specifications. Whether these specifications reflect good behaviour in a broad sense is often out of scope.
Secondly, social appraisal and reputation are very, very context-specific and cannot always be captured by simple formalized metrics. While it might be tempting to translate metrics that work in one contribution context into another, they often end up completely divorced from reality. This is not about tradeoffs but about the poor practice of applying the wrong tools to the wrong problems (it is also how you get the monstrosity of social-credit scores). Finally, different contexts can provide radically different interpretations of the same contribution. For example, contributors to financial privacy software are worthy of high moral praise, yet they can be painted as money launderers in public discourse by malicious entities with enough power over public opinion. It is safe to say that there are no good reasons to propose universal on-chain identity and reputation.
Still, reputation mechanisms can be quite useful in a specific, well-defined context. However, to reap the fruits of reputation mechanics without slipping into a social-credit-style dystopia, we need to disentangle reputation from identity and the pipe dream of Sybil resistance. This may seem counter-intuitive at first, but a functioning reputation mechanism, in fact, does not need a strong or persistent identity mechanism. Nor do we need strong identity to address the problem of Sybil attacks in a satisfactory manner. The breakthrough significance of Nakamoto consensus, by the way, is that it side-steps these problems so elegantly. It does not matter who you are, anyone can join and update the ledger, but the damage that participants can do is bounded. So can we have decentralized and permissionless reputation in the same spirit?
Thanks to 30 years of P2P research (research in distributed systems, not just the p2p networking layer), we know that all mechanisms for tackling Sybils and misbehaving participants in decentralized networks come with certain tradeoffs, going beyond privacy problems. These tradeoffs can be presented as a trilemma (everybody loves them, right?) or a design space where the choice of instruments limits the available space of a triangle. It is a viable representation given that most well-known reputation tools broadly fall into three categories: cryptographic proofs, trusted oracles, and social feedback.
As a general concept, cryptographic proof solutions were proposed as early as the 1980s. Bounding the behaviour of system participants through cryptographic proofs is a compelling concept but, unfortunately, not very generalizable when we try to use it as proof of 'good contributions'.
For instance, consider a DAO where all contributions of participants are cryptographically proven and independently verifiable by anyone; it starts looking a bit strange. For one, proving in a trustless way that a contribution was made is very hard. This works only for a narrow type of resource contribution, such as proof of storage, proof of computation, or proof of bandwidth. The second tricky issue is that there is not always an easy way to generate, share, and agree on the proofs in a trustless way globally.
Thus, to account for all contributions of DAO participants in such a fashion, we would need some very weird solutions. E.g., key-logging software on the machine of a DAO participant that records all their contributions, signs them cryptographically, and uploads them to a public ledger. There are, of course, other ways to account for ‘good’ and ‘bad’ behaviour in a system that do not require such cumbersome logic.
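To make the narrowness of proof-based accounting concrete, here is a toy proof-of-storage challenge in Python. The construction and names are illustrative assumptions, not any deployed protocol: the verifier issues a fresh nonce, and the prover must hash it together with the stored file, which it can only do if it actually holds the data. Note that this naive variant requires the verifier to keep its own copy of the file; real schemes (e.g., Merkle-proof-based ones) are designed to avoid exactly that.

```python
import hashlib
import os

def respond(file_bytes: bytes, nonce: bytes) -> str:
    """Prover: hash the fresh nonce together with the stored file.
    Without the actual bytes, the correct digest cannot be produced."""
    return hashlib.sha256(nonce + file_bytes).hexdigest()

def challenge(original: bytes) -> tuple[bytes, str]:
    """Verifier: pick a random nonce and precompute the expected answer
    (naively assumes the verifier also holds a copy of the data)."""
    nonce = os.urandom(16)
    return nonce, hashlib.sha256(nonce + original).hexdigest()

data = b"some archived DAO records"
nonce, expected = challenge(data)

assert respond(data, nonce) == expected        # honest prover passes
assert respond(b"garbage", nonce) != expected  # prover without the data fails
```

The nonce matters: without it, a prover could store only the file's hash and discard the data, answering every challenge for free.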
The use of trusted oracles also has some serious limitations. If we introduce trusted oracles to account for participants' behaviour, this puts a hard limit on how permissionless the system can be. And if we try to decentralize the oracles, we end up either with prediction markets, or we move in the direction of the third type of mechanism: social network feedback.
Social feedback, or social network feedback, refers to mechanisms where peers evaluate each other's contributions and exchange this information over a P2P network. They initially became popular with the success of social networks in Web2, and perhaps also got mixed up with the problems inherent to social media networks. However, social feedback mechanisms have some peculiar properties that make them worthy of a second look. First, these are very flexible methods that can be adapted to different types of contributions, as peers can determine the semantics of feedback themselves. DAO participants, for example, can endorse their peers for any kind of contribution. Second, they can be as decentralized and permissionless as we want, since this information can be shared and aggregated between participants in the P2P network.
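As an illustrative sketch (the record fields, contexts, and aggregation rule here are our own assumptions, not any particular protocol), a social feedback mechanism can be as simple as peers exchanging endorsement records and aggregating them locally per context:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Endorsement:
    endorser: str   # peer issuing the feedback
    subject: str    # peer being endorsed
    weight: float   # semantics decided by the peers themselves
    context: str    # e.g. "code-review", "governance" (hypothetical labels)

def aggregate(endorsements, context):
    """Naive local aggregation: sum endorsement weights per subject,
    restricted to one context (reputation is context-specific)."""
    scores = defaultdict(float)
    for e in endorsements:
        if e.context == context:
            scores[e.subject] += e.weight
    return dict(scores)

feed = [
    Endorsement("alice", "bob", 1.0, "code-review"),
    Endorsement("carol", "bob", 0.5, "code-review"),
    Endorsement("alice", "carol", 1.0, "governance"),
]

assert aggregate(feed, "code-review") == {"bob": 1.5}
```

Note that a plain sum of weights like this is trivially Sybil-vulnerable - an attacker can mint endorsers at will - which is exactly why the aggregation step, not the record format, is where the hard design work lies.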
Yes, there are too many trilemmas in distributed systems. But sometimes employing an overused concept can actually be good, since we are familiar with it. So let's consider the design of decentralized reputation as a trade-off design space that looks sort of like a trilemma. In general, we are limited to cryptographic proofs, trusted oracles, and social feedback mechanisms, which represent different points on a triangle, each with its own strengths and weaknesses.

Cryptographic proofs offer a compelling solution for certain types of resource contributions, such as proof of storage, computation, or bandwidth. These proofs are independently verifiable and can be generated and shared in a trustless manner. However, their applicability to broader reputation systems is limited. Proving that a contribution has been made, in general, is challenging, especially when the contribution is abstract or subjective. Moreover, generating, sharing, and agreeing on these proofs globally in a trustless way is difficult, making cryptographic proofs a narrow solution for reputation systems.
Trusted oracles provide an alternative approach, where external entities validate contributions and feed this information into the reputation system. However, the use of trusted oracles introduces centralization, which contradicts the permissionless nature of decentralized systems. Decentralizing oracles is possible, but this often leads to reliance on prediction markets, which have limited applicability beyond specific use cases.
Social feedback, as discussed earlier, offers a decentralized and flexible approach to reputation but comes with its own set of challenges. While it allows for context-specific evaluations and can be adapted to different types of contributions, social feedback mechanisms are prone to biases and can be easily manipulated.
Thanks to massive marketing campaigns by projects that want to scan everyone's eyeballs, current discourse around reputation and credentials somehow holds uncritically that Sybil resistance is an absolutely necessary and good thing. But is it really?
Full Sybil resistance, as said before, is a sort of pipe dream, because in reality any mechanism trying to prove that somebody is not a Sybil (not a bot, not an AI) just pushes the problem to a different layer. For if we assume that only people with scanned eyeballs can participate in a DAO with high monetary rewards, rest assured there will be a market where one can buy verified users cheaply in bulk. So this amounts to just pushing the problem away to another socio-economic layer. This, by the way, again highlights the questionable purpose of surveillance-style 'proof of real human' solutions. Or consider that it is always possible to organize off-channel collusion between different participants to achieve the same goals a Sybil attack could achieve. It is a very strong and plausible conjecture, then, that fully guaranteed Sybil resistance is neither possible nor necessary in real-life decentralized systems.
What we care about in most cases, though, is not the absence of Sybils but that attacks of this type do not break or undermine the functionality of a system. Say the reputation mechanism is used to distribute rewards: a Sybil attack should be bounded so that the attacker can extract at most some threshold-bounded amount of resources. This brings us to the question of Sybil tolerance: can our system tolerate these types of attacks? And it turns out that we can indeed design a social feedback mechanism that is Sybil-tolerant.
Social feedback mechanisms intuitively translate well into collaboration schemas, with resource distribution based on the merits of contributions. Many social arrangements work this way, so we can try to translate these principles into the context of DAO applications as well. E.g., peers can provide feedback to each other, resulting in a reputation ranking that can be used to distribute rewards from the DAO treasury proportionally to accrued reputation (see MeritRank, for example).
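A minimal sketch of this idea: estimate reputation as the visit frequency of trust-weighted random walks restarting from a seed peer (loosely in the spirit of personalized PageRank), then split a reward budget proportionally. The graph, weights, and parameters below are hypothetical, and this is a toy illustration, not MeritRank itself.

```python
import random

def walk_reputation(edges, seed, n_walks=2000, reset=0.15, rng=None):
    """Reputation as normalized visit counts of random walks that
    restart from `seed`, stepping along trust-weighted endorsement edges."""
    rng = rng or random.Random(42)
    visits = {}
    for _ in range(n_walks):
        node = seed
        while True:
            visits[node] = visits.get(node, 0) + 1
            out = edges.get(node, [])
            if not out or rng.random() < reset:
                break  # dead end, or walk restarts
            targets, weights = zip(*out)
            node = rng.choices(targets, weights=weights)[0]
    total = sum(visits.values())
    return {n: c / total for n, c in visits.items()}

# hypothetical endorsement graph: node -> [(endorsed peer, trust weight)]
edges = {
    "seed":  [("alice", 1.0), ("bob", 1.0)],
    "alice": [("bob", 1.0)],
    "bob":   [("alice", 0.5)],
}
rep = walk_reputation(edges, "seed")

# distribute a treasury budget proportionally to accrued reputation
budget = 100.0
rewards = {n: budget * score for n, score in rep.items() if n != "seed"}
```

The Sybil tolerance here comes from the walk structure: reputation can only flow into a Sybil region through the edges the attacker's real identity has actually earned, so minting more fake nodes redistributes, rather than increases, what the attacker can extract.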
So maybe one promising path forward for blockchain reputation systems is to embrace flexible, context-sensitive mechanisms that prioritize Sybil tolerance over strict Sybil resistance. For one, it helps us avoid the curse of persistent identity, with its Pandora's box of intrusive surveillance and China-style social credit scores. By focusing on minimizing the impact of Sybil attacks rather than attempting to eliminate them entirely, we can develop reputation systems that preserve the privacy and decentralized nature of blockchain networks while maintaining their integrity and effectiveness. And it is entirely possible that we will find other trade-offs that are more or less acceptable in specific use cases. But what all this really shows is that, with decentralized reputation, it is as easy to create new problems as it is to solve existing ones. To get there, we really need to think outside the box.