Public Yapper leaderboards are killing InfoFi.
Projects are paying for attention but getting noise instead. The current state of InfoFi turns creators into spammers.
Everyone’s tired of a timeline full of commodity content, and all that attention isn’t bringing in new eyeballs or real usage.
Here’s what needs to change:
I wasn’t a fan of public leaderboards when they launched, and I’m still not a fan now.
Mindshare and Yaps are different metrics, and I feel that mindshare can be gamed more easily:
The mindshare algo is different from the Yaps algo, and it tends to reward vanity metrics for rankings.
Public leaderboards make this worse because everyone knows who’s ranking and exactly how much to ‘grind’ to hit a certain rank.
I still believe that the EigenLayer model worked best:
Back before Kaito was popular, EigenLayer used Kaito’s algo to find social accounts that talked about the project.
This was ideal because no one knew about the incentives, yet people still received tokens for talking about EigenLayer.
Granted, everyone was talking about them since they were hyped on the timeline.
But it shows how private leaderboards can reward true believers, or those who are long-term aligned.
Projects would want more people to know about their incentives to generate more mindshare.
But public leaderboards encourage farming, and that’s why we’re in this current state.
The quality of a piece of content is hard to determine because it’s so subjective.
Some may say that Ardizor’s posts are valuable, but I would disagree.
My take is that a piece of content is high-quality if it’s valuable.
And it’s valuable if it solves a problem.
It’s even more effective if the piece is authentic.
Sh*tposting can also be quality, as it solves a problem (the reader’s unhappiness) with humour.
In the current state, algos aren’t able to identify quality content.
And we shouldn’t rely on them to either:
SEO, or ranking on Google’s search results page, is a perfect example of how an algo gets gamed.
Everyone’s trying to get a higher ranking by using the best keywords or structure that the algo likes.
This usually results in content that’s not helpful to the user at all:
Some will try to make you stay on the page by putting the answer at the bottom
Others write a lot of nonsense that’s totally unrelated to your search term
I’ve stopped using Google for searches because it doesn’t help me get the answer I want.
And I see the same pattern in InfoFi where everyone’s looking to game the algo:
Keyword-stuffing project names in their posts
Writing a long post about a project but actually saying nothing (because it’s full of fluff)
Once there’s a known playbook for ranking on the leaderboards:
Everyone uses the same tactics, and they all basically sound like the same person.
A value indicator could be more helpful, and the only way I see this working is through manual human review:
@sanctumso did this for their earnestness review, but it took a lot of manual hours to go through every post.
InfoFi algos can filter out posts that obviously have no value, while the more subjective ones undergo a manual review to weed out engagement farming posts, as sketched below.
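Here’s a minimal sketch of that triage, assuming a hypothetical spam_score from some upstream model (the score and thresholds are made up for illustration; this isn’t how Kaito or Sanctum actually works):

```python
# Hypothetical triage: the algo auto-rejects obvious junk,
# humans only review the subjective middle band.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    spam_score: float  # 0.0 = clearly valuable, 1.0 = obvious spam (assumed upstream model)

def triage(posts: list[Post], reject_above: float = 0.9, accept_below: float = 0.3):
    accepted, rejected, review_queue = [], [], []
    for post in posts:
        if post.spam_score >= reject_above:
            rejected.append(post)       # obvious engagement farming: no human time spent
        elif post.spam_score <= accept_below:
            accepted.append(post)       # obviously fine: rank it normally
        else:
            review_queue.append(post)   # subjective middle band: manual earnestness review
    return accepted, rejected, review_queue

accepted, rejected, queue = triage([
    Post("alice", "Bridged over last week, here's what broke for me...", 0.2),
    Post("bob", "gm gm $TOKEN $TOKEN $TOKEN to the moon", 0.95),
    Post("carol", "Long thread restating the docs", 0.6),
])
print(len(accepted), len(rejected), len(queue))  # 1 1 1
```

This way, human hours only go to the middle band instead of every single post, which is what made @sanctumso’s review so time-consuming.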
But there’s another way to protect against low-quality posts:
Proof of Stake already shows how to incentivise good behaviour:
Validators have to stake ETH before they can validate transactions.
If they validate the wrong blocks, they get slashed and part of their staked ETH is destroyed.
This ensures accountability and penalises bad behaviour, and that’s exactly what’s missing in InfoFi.
Anyone can say anything and won’t be penalised for spreading false rumours, FUD, or creating drama for no reason.
Or even for blatantly using AI to generate content.
If there’s something at stake, creators will be more cautious about what they say about a project.
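Here’s a minimal sketch of what that could look like, assuming a hypothetical points-staking mechanic (this is not how Kaito or any live InfoFi product works): a creator locks points behind each post, and a claim proven false burns a slice of the stake.

```python
# Hypothetical InfoFi slashing, loosely mirroring Proof of Stake:
# stake points to post, lose a slice if the claim is proven false.
class Creator:
    def __init__(self, name: str, points: float):
        self.name = name
        self.points = points   # free balance (e.g. something like Yaps)
        self.staked = 0.0      # locked behind live posts

    def post(self, stake: float) -> None:
        assert stake <= self.points, "not enough points to back this post"
        self.points -= stake
        self.staked += stake

    def resolve(self, stake: float, truthful: bool, slash_rate: float = 0.5) -> None:
        self.staked -= stake
        if truthful:
            self.points += stake                     # stake returned in full
        else:
            self.points += stake * (1 - slash_rate)  # half the stake is burned

fudder = Creator("fudder", points=100.0)
fudder.post(stake=20.0)                    # backing a claim with 20 points
fudder.resolve(stake=20.0, truthful=False)
print(fudder.points)                       # 90.0 -- spreading a false rumour now costs something
```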
I remember Kaito considering Yaps slashing, but I don’t think it’s live yet.
Till then, the algo needs to be smarter:
I’m not sure about other InfoFi algos, but Kaito seems to favour smart follower engagement.
This can be abused: some accounts farm engagement by promising smaller accounts higher leaderboard rankings if they engage with their posts.
There need to be more effective filters to remove low-quality content (a rough sketch follows the list below):
Engagement farming filters
AI-generated content filters
Documentation summarisation filters
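Here’s a rough sketch of how those three filters might chain together; the keyword heuristics are entirely made up for illustration, and any real version would use proper classifiers rather than regex matching:

```python
import re

# Toy heuristics standing in for real classifiers.
def engagement_farming(text: str) -> bool:
    # Fishing for replies/likes rather than saying anything.
    return bool(re.search(r"(drop your|tag a friend|like if|rt if)", text, re.I))

def ai_slop(text: str) -> bool:
    # Stock LLM phrasing as a crude proxy for generated content.
    return bool(re.search(r"(delve|in the ever-evolving|game-changer)", text, re.I))

def docs_rehash(text: str) -> bool:
    # Restating the docs without any first-hand experience.
    return "according to the docs" in text.lower() and " i " not in f" {text.lower()} "

FILTERS = [engagement_farming, ai_slop, docs_rehash]

def passes(text: str) -> bool:
    return not any(f(text) for f in FILTERS)

print(passes("Like if you're bullish, tag a friend!"))           # False
print(passes("I bridged over, got rekt by gas, here's why..."))  # True
```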
I hope that @xeetdotai’s algo will be more effective at removing such content.
The best content comes from accounts adding their own experiences to their writing, rather than generating hype without any substance.
And here’s the best way to identify this:
I said this back when the Yapper leaderboards just launched, and I believe this will be the future of airdrops:
Say what you do onchain. It’s really that simple.
Do cool things onchain, and share your results (even if they’re failures).
This would wipe out accounts that create noise, and it’s a better way to identify real users who interact with the product and share their experiences.
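A minimal sketch of what checking this could look like, assuming the platform already links a wallet to each social account; the transaction records here are hardcoded toy data, and a real version would pull them from an RPC node or indexer:

```python
from datetime import datetime, timezone

# Toy transaction records; a real system would fetch these onchain.
TXS = {
    "0xAlice": [
        {"to": "0xProjectRouter", "timestamp": datetime(2024, 5, 1, tzinfo=timezone.utc)},
        {"to": "0xSomeDex",       "timestamp": datetime(2024, 6, 2, tzinfo=timezone.utc)},
    ],
    "0xBob": [],  # loud on the timeline, silent onchain
}

def used_the_product(wallet: str, project_contract: str, before: datetime) -> bool:
    """Did this wallet actually interact with the project before posting about it?"""
    return any(
        tx["to"] == project_contract and tx["timestamp"] < before
        for tx in TXS.get(wallet, [])
    )

post_time = datetime(2024, 6, 10, tzinfo=timezone.utc)
print(used_the_product("0xAlice", "0xProjectRouter", post_time))  # True: real user
print(used_the_product("0xBob", "0xProjectRouter", post_time))    # False: noise
```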
This is something that @xeetdotai recognises, and I hope they can create something more effective than the current InfoFi projects.
There’s just too much noise, and what’s more valuable is converting attention into long-term users.
The current model is broken: those with the loudest (but not necessarily the most valuable) voices win.
AI is coming for creators who produce commodity content, but it can’t replicate the experiences you have when actually using a project (whether good or bad).
I believe these creators will eventually become irrelevant once AI can do the same thing at scale.
To future-proof your social reputation:
The only way is to display your authentic self and combine it with your onchain profile.
I shared more of my thoughts on this here.
Whenever you’re ready, there are 2 ways I can help you:
Audience to Airdrop: Steal my playbook to build trust fast and earn social airdrops
Secure Airdrop Hunter: My flagship Web3 security course, learn how to protect your assets and onchain footprint while stopping hackers from draining your funds