
🌐 Website
📘 Documentation
👐 Discord
💽 Github
Rated is an experiment in coordination. The experiment starts with the Beacon Chain validator community. At the moment, there is a strong undercurrent of disparity in how operators, researchers and other eth2 stakeholders measure the performance of individual validator indices and groups of indices.
We believe that this is a market failure, as we are all effectively looking at the same thing and yet we see it differently. Solving that coordination problem should unlock a lot of value for all involved.
The goals of Rated are threefold:
To promote transparency on the Beacon Chain.
To serve as an enabler for the community to align on how validator performance should be measured.
To eventually become an enabler of new product creation for the eth2 community.
We’ve published a front-end at rated.network that we hope serves as another point of reference for transparency on the Beacon Chain. The front-end is powered by chaind and a Lighthouse node. On top of that we have done work on a v0 model of validator performance that appears on the front-end as “effectiveness rating”; you can find out more about how the model works in the documentation we have published.
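For illustration only, here is a toy sketch of what an attestation-based effectiveness score could look like. To be clear, this is not the actual Rated v0 model (that is specified in our documentation); the fields, weights and formula below are all assumptions:

```python
# Toy "effectiveness" score for a validator, combining attestation
# participation and inclusion timeliness. Purely illustrative; the real
# Rated v0 model is defined in the project documentation.

from dataclasses import dataclass


@dataclass
class AttestationRecord:
    included: bool        # was the attestation included on-chain?
    inclusion_delay: int  # slots until inclusion (only meaningful if included)


def toy_effectiveness(records: list[AttestationRecord]) -> float:
    """Return a score in [0, 1]: participation rate scaled by timeliness."""
    if not records:
        return 0.0
    included = [r for r in records if r.included]
    if not included:
        return 0.0
    participation = len(included) / len(records)
    # 1/delay rewards fast inclusion: delay 1 -> 1.0, delay 2 -> 0.5, ...
    timeliness = sum(1 / r.inclusion_delay for r in included) / len(included)
    return participation * timeliness


records = [
    AttestationRecord(True, 1),
    AttestationRecord(True, 2),
    AttestationRecord(False, 0),
]
print(round(toy_effectiveness(records), 3))
```

The point of the sketch is the shape of the problem, not the numbers: any such rating has to pick weights for participation versus timeliness, and that choice is exactly the kind of subjectivity we want to resolve with the community.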
In the coming weeks we will be releasing analyses we have worked on in support of the v0 effectiveness model's descriptive and predictive capacity. Soon after, we are also planning to release a free API for the community to access the effectiveness scores and various other useful parameters around validator performance. We will also be open-sourcing many of the elements that make up Rated.
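The API is still to come, but as a sketch of the kind of workflow it could enable, here is a minimal example that parses a hypothetical per-validator response body. The payload shape and field names are assumptions for illustration, not the real API schema:

```python
import json

# Hypothetical response body for a per-validator effectiveness lookup;
# the actual Rated API schema may differ.
sample_response = """
{
  "validator_index": 12345,
  "effectiveness_rating": 0.987,
  "attester_duties": 2250,
  "proposer_duties": 3
}
"""


def parse_score(raw: str) -> tuple[int, float]:
    """Extract (validator_index, effectiveness_rating) from a raw JSON body."""
    payload = json.loads(raw)
    return payload["validator_index"], payload["effectiveness_rating"]


index, rating = parse_score(sample_response)
print(index, rating)
```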
The v0 model we have published on rated.network is a good, robust model. But given that there is a fair amount of subjectivity in the way performance is recorded on the Beacon Chain, v0 still represents our view. Our goal is to arrive at a v1 that bakes in the broader community's view, and at the end of it all, offer the model, website and API as public-good resources for anyone interested to tap into.
We believe that taking the pain to go through the motions of coordination ex-ante will pay dividends for all of us down the line. Given that the protocol is and will continue to be subject to change at the consensus level, the way we define performance will naturally also change over time. Having a forum to discuss upcoming changes and an agreed-upon path to upgrading the very definition of performance will, we believe, help us achieve several higher-order goals.
More pragmatically, we believe that alignment in how we measure validator performance, and standardising this metric, will lead to better insurance products, validator pool rewards mechanisms and new financial products, as well as helping move the cogs on client diversity.
Our immediate goal is to start a conversation around the v0 model: whether folks like the approach or not, what to add, what to remove, what to consider that we haven't considered, and so on. To take part, hop in our Discord here. After we go through those motions, we'll look to kick a POAP-powered signalling vote into gear, inviting relevant POAP holders (e.g. the Eth2.0 Serenity Launch POAP) to take part.
If after reading the above you find yourself scoring anywhere on the scale of intrigued to mission aligned, we would love for you to be involved! There are a few ways to do that:
Hop in the Discord here
Invite high-value-add folks you think would be interested to the Discord
Read the documentation here
Participate in the discussion
Amplify the message
We look forward to having as many of you involved as possible.
Let’s Rate! 🍬
Rated v0 was created by @eliasimos and @ariskkol, with invaluable help and input from @0xjack_