

It's hard to keep up with how fast the ecosystem moves! Plus, there is nothing that distills exactly what we need as Node Runners! This is why Chronicles of the Infra World has come to exist: a weekly recap for Node Runners, the Dappnode team, and users to know what's going on in the Infrastructure world and what to do. Subscribe to get your weekly rundown of what's important to you as a Node Runner.
Each issue highlights the most important news of the week, including new ROI opportunities, protocols, trends, etc. Today's newsletter will focus on:
Ethereum
Pectra
Croissant Issuance
36M Gas Blocks
Permissionless CSM
Obol Airdrop, RAF, and Delegates
Smooth Upgrades
AI
DeepSeek R1
Venice AI Airdrop
Lots happening in the Ethereum world, as usual. Everything except numba up.
Dates:
Network | Date |
Holesky | |
Mainnet | Expected mid-March. No date yet |
The only thing you must do is update your client implementations to their Pectra-compatible versions. Beyond that, there are some optional things you can do:
Pectra also brings an optional change on how to manage your Withdrawal Credentials and accumulate rewards. If you want, you will be able to:
Compound rewards in your validator instead of automatically withdrawing everything above 32 ETH, up to 2048 ETH in one validator.
Consolidate several validators into one, up to 2048 ETH in one validator.
If you decide to compound in a validator, withdrawing any balance between 32 and 2048 ETH will require a manual transaction. Above 2048 ETH, automatic withdrawals will work the way balances above 32 ETH do today.
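To make the two withdrawal regimes concrete, here is a minimal sketch (illustrative only, not client code) of how the automatic withdrawal sweep treats a validator depending on its credential type, assuming the 32 ETH floor for today's 0x01 credentials and the 2048 ETH ceiling for the new compounding credentials:

```python
# Simplified model of the post-Pectra withdrawal sweep (EIP-7251).
# Not actual consensus-client logic - just the balance arithmetic.
REGULAR_CEILING_ETH = 32       # 0x01 credentials: everything above 32 is swept
COMPOUNDING_CEILING_ETH = 2048 # compounding credentials: sweep only above 2048

def auto_withdrawn(balance_eth: float, compounding: bool) -> float:
    """ETH the automatic sweep skims off a validator's balance."""
    ceiling = COMPOUNDING_CEILING_ETH if compounding else REGULAR_CEILING_ETH
    return max(0.0, balance_eth - ceiling)

# A regular validator at 33 ETH gets 1 ETH swept automatically...
print(auto_withdrawn(33, compounding=False))   # 1.0
# ...while a compounding validator keeps accumulating up to 2048 ETH.
print(auto_withdrawn(33, compounding=True))    # 0.0
print(auto_withdrawn(2050, compounding=True))  # 2.0
```

Anything between 32 and 2048 ETH on a compounding validator stays put until you send a manual withdrawal transaction.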
I wrote a Twitter thread explaining EIP 7251 in more detail:
For us Node Runners, apart from being able to consolidate into a bigger validator - if you're lucky enough to have more than one - and simplify key management, we may also see a reduction in CPU and bandwidth utilization on our machines. If consolidation happens and the validator count drops, fewer consensus messages and attestations get broadcast across the network (less bandwidth), and nodes process fewer attestations and related messages (less CPU load). You could see it as making Ethereum more efficient as a whole, eliminating the overhead of processing extra validators.
That said, the bandwidth savings might end up cancelled out: EIP-7691 raises the blob target from 3 to 6 (and the max from 6 to 9), which might increase bandwidth usage.
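A rough back-of-the-envelope for the blob side, assuming 128 KiB per blob and 12-second slots (both current protocol constants):

```python
# Back-of-envelope: sustained bandwidth needed just for blob data.
BLOB_SIZE_BYTES = 128 * 1024  # each blob is 128 KiB
SLOT_SECONDS = 12             # one block per 12-second slot

def blob_bandwidth_kib_s(blobs_per_block: int) -> float:
    """Average blob download rate in KiB/s if every block is full."""
    return blobs_per_block * BLOB_SIZE_BYTES / SLOT_SECONDS / 1024

print(blob_bandwidth_kib_s(3))  # pre-Pectra target: 32.0 KiB/s
print(blob_bandwidth_kib_s(6))  # Pectra target: 64.0 KiB/s
```

So doubling the blob target roughly doubles the steady-state blob bandwidth, which is the effect that could offset the consolidation savings. (Real-world gossip overhead is higher; this is only the raw blob payload.)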
I've mentioned the EIPs that will most affect Node Runners and Home Stakers, but Pectra comes with a bunch more:
• Official meta-EIP (tracker of all EIPs included in Pectra)
• Sassal's Pectra rundown:
Changing Ethereum's issuance is something we watch extremely closely. Beyond its importance for Ethereum's moneyness, it can radically change the landscape of our operations in the Infra world.
Justin Drake dropped a "croissant issuance" proposal:
The proposal consists of:
A cap on issuance at a target stake rate - for example, 1% issuance at 25% staked
A soft cap at, for example, 50% staked, where there is no issuance. None. Only Execution Layer rewards (EL: tx fees / MEV rewards + potentially commit-boost preconfirmations)
1% issuance doesn't mean 1% APR. With the current supply of 120,528,666 ETH, 25% staked would be ~30M ETH, and 1% issuance of the supply would be ~1.2M ETH; divided across those 30M, that's around 4% APR from issuance, before EL rewards.
For reference, on the current curve we are at ~33M ETH staked today - slightly above the 25% target - with an issuance APR of about 2.9%.
So the croissant issuance with these mock parameters would actually be very similar, if not better, at rewarding validators at the current level of staking, but it would heavily disincentivize new entrants.
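The arithmetic above is easy to check yourself; a quick sketch using the mock parameters from the proposal (the supply figure is the one quoted above):

```python
# Issuance APR under the mock "croissant" parameters quoted above.
SUPPLY_ETH = 120_528_666  # current ETH supply, per the figure in the text

def issuance_apr(staked_fraction: float, issuance_fraction: float) -> float:
    """APR from issuance alone (before EL rewards), as a percentage."""
    staked_eth = SUPPLY_ETH * staked_fraction
    issued_eth = SUPPLY_ETH * issuance_fraction
    return issued_eth / staked_eth * 100

# 1% issuance at 25% staked:
print(round(issuance_apr(0.25, 0.01), 1))  # → 4.0
```

Note the supply cancels out: the issuance APR is just issuance fraction over staked fraction, which is why 1%/25% lands at 4%.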
I want to do a deep exploration of the economic forces that would affect staking decisions (got some serious readings here and here), but it's beyond the scope of this quick weekly recap. Suffice to say that for Home Stakers, their APR would be at the mercy of more forces, and therefore potentially more variable. Right now it only goes down marginally when more ETH is staked. Basically - in the current situation - if you do the maths and it's worth staking for you now, it will probably still be worth it in 6 months; your reaction time is in the months ballpark before economic circumstances change dramatically. Croissant issuance might make you manage your validators more actively, walking the line between opex and issuance changes.
IDK, could be Justin trolling tho.
About 50% of validators are signaling for bigger blocks - 36M gas!
Why would we want to do that? See table below 👇
Pros | Cons |
✅ Higher throughput (more gas, more stuff fits in the block) | ❌ Bigger blocks can increase hardware requirements |
✅ Lower gas fees | ❌ If HW requirements increase, home stakers can be disincentivised, increasing centralization |
✅ Scaling L1 - not only L2 scaling! | |
The current increase from 30 to 36M gas seems to be safe, and many Home Stakers have signaled for the change. More info at pumpthegas.org.
You can now run a validator with 2.4 ETH and get the remaining 29.6 ETH from Lido. This is a great way to:
get started staking if you don't have 32 ETH
maximize the utilisation of your node by adding "cheap" validators on top of your existing setup
Bear in mind that the APR will probably be lower than the advertised 7.6% (but still higher than the ~3% for regular validators), as Early Access validators had a different bond requirement than the permissionless version.
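Why can a 2.4 ETH bond earn a higher APR than a vanilla 32 ETH validator? A very simplified sketch of the leverage effect - the bond keeps earning staking rewards, plus the operator takes a fee on the rewards of the borrowed stake. The base APR and operator share below are illustrative assumptions, not Lido's actual fee schedule:

```python
# Illustrative CSM bond economics. BASE_APR and OPERATOR_SHARE are
# made-up placeholder values, NOT Lido's real parameters.
BOND_ETH = 2.4         # operator's own capital (from the text above)
VALIDATOR_ETH = 32.0   # full validator size
BASE_APR = 0.03        # assumed protocol-level staking APR
OPERATOR_SHARE = 0.06  # assumed operator fee on the borrowed stake's rewards

def csm_operator_apr() -> float:
    """APR on the operator's own capital (the bond), as a fraction."""
    bond_rewards = BOND_ETH * BASE_APR                  # bond keeps staking
    borrowed_eth = VALIDATOR_ETH - BOND_ETH             # 29.6 ETH from Lido
    fee_rewards = borrowed_eth * BASE_APR * OPERATOR_SHARE
    return (bond_rewards + fee_rewards) / BOND_ETH

print(f"{csm_operator_apr():.1%}")  # → 5.2% on the bond, with these numbers
```

The point is structural: the smaller the bond relative to the validator, the more the operator fee is amplified against the operator's own capital.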
There are several ways to validate with less than 32 ETH: from creating a Vault in Stakewise, to gathering some friends to run a DVT validator, to using Rocketpool. All of them are very valid options, but they often run up against the liquidity problem: is there enough ETH around to fill validators? If not, you might have to wait until you can finally get the ETH to run the validator in your setup.
Lido has a bunch of liquidity, but for now it has the same problem, albeit for different reasons. CSM is only one module in Lido's range of choices, and its utilisation is capped at 2% of the total Lido liquidity. This cap was reached soon after CSM became permissionless, so now you'll have to wait until there is rotation in the protocol's ETH.
Overall I think CSM is an amazing solution for decentralising Lido's operator set, albeit with some problems (by the same virtue of being permissionless, it is capturable by centralized actors). I am a Lido Delegate and put my thoughts on CSM and other topics here. If you have some LDO, I'd appreciate your delegation, and I'll be your cypherpunk fighter inside the biggest ETH project.
Obol's token is out! And you might have some. Eligibility criteria include the Obol Core Community, Obol Contributions holders, Solo Stakers, and Rocketpool Node Operators. Check it at claim.obol.org!
Obol is a DVT enabler - that's Distributed Validator Technology - which means you can have a validator key distributed among different keyshares, actors, and/or locations, in a similar way to how a multisig works: you only need a threshold of the keyshares (say, 5-of-7) to sign every attestation or proposal. This allows individual machines or participants to have some downtime without affecting the performance of the validator. Compared to a validator in a single spot, which would go down on any hardware, internet, or power problem, a DV split across 7 could withstand 2 of the keyshares being completely offline without appearing offline from the consensus chain's perspective, so it would keep accumulating rewards.
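The reliability gain compounds nicely. Treating each node's uptime as independent (an assumption - correlated failures like a shared client bug break it), a quick binomial calculation shows how a 5-of-7 DV built from ordinary home nodes becomes effectively always-on:

```python
from math import comb

def dv_availability(threshold: int, total: int, node_uptime: float) -> float:
    """Probability that at least `threshold` of `total` independent
    keyshares are online, given each node's individual uptime."""
    return sum(
        comb(total, k) * node_uptime**k * (1 - node_uptime)**(total - k)
        for k in range(threshold, total + 1)
    )

# A single home node at 99% uptime misses ~1% of duties; a 5-of-7 DV
# built from seven such nodes is up ~99.997% of the time.
print(f"{dv_availability(5, 7, 0.99):.6f}")
```

That's the multisig analogy made quantitative: the cluster tolerates `total - threshold` simultaneous failures, and the probability of losing three of seven independent nodes at once is tiny.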
The first thing to do with $OBOL will be to retroactively fund aligned projects. Seed the ecosystem with $OBOL! That's a good move to get alignment in projects, after getting alignment from individuals via the airdrop. If you want to apply go here.
Delegates are those who will decide and vote in the Obol DAO. Of course you can vote directly, but Delegates commit to staying on top of the decisions and can save you gas and cognitive effort on voting. I would recommend delegating to either Chuygarcia.eth or myself.
The upgrade to v2 of the Smooth contract is underway! The Holesky implementation is waiting out the 7-day timelock before it can be upgraded and tested properly.
Smooth is a MEV Smoothing Pool that offers exposure to more MEV opportunities for Solo Stakers, with the possibility of increasing Execution Layer rewards by 80-130% compared to the median validator.
The contracts will allow SmoothDAO to ban users that do not use MEV setups, following the Terms of Use approved by SmoothDAO. Moreover, a watchtower script is being set up to track information about offending validators. Here is an example.
The new upgrade also paves the way for full compatibility with EigenLayer, to squeeze maximum value out of these validators: EL rewards maximized with Smooth, plus restaking rewards.
Decentralized AI is another area of interest for Node Runners. Can we leverage our hardware to deploy and use exocortexes (external brains)? I'm going to be putting my thoughts here and news when I have them 👀
Up until now we had two types of LLMs: big, opaque foundational models we didn't know much about, but which had high performance and high benchmark scores; and Open Source models that enthusiasts experimented with and used to compete - sometimes approximating the foundational models in tests and benchmarks, but seemingly always one step behind.
DeepSeek R1 blurred that distinction. DeepSeek is a big foundational model that is also Open Source and easy to recreate. Its performance challenges OpenAI's o1 and o3, and anyone can deploy it given enough compute.
...But the truth is it's still a gigantic model, and its quantizations (reduced versions that fit in consumer hardware) come with reduced quality.
What is REALLY exciting are projects like EXO, which let you split a big model across different machines, as they did with DeepSeek v3 (the version previous to R1).
I believe splitting models across different pieces of hardware has tremendous potential for distributed AI. The latency between layers might be a hurdle for now, but given the tremendous improvements in consumer hardware, I'm optimistic we'll see really powerful models run in a distributed manner very soon.
VeniceAI positions itself as a "private" and uncensored AI app. It works similarly to ChatGPT and others, with a free tier and a pro offering with more powerful models.
The privacy claim is just that - a claim - and is not verifiable. You still have to trust them to delete logs and not store prompts and conversations, but it's easier to trust them than DeepSeek or OpenAI.
In any case, they have introduced a token to facilitate API calls and allocate compute power, and airdropped it to potential users. Check if you've got any here: https://venice.ai/token
That wraps up this week’s deep dive into Ethereum’s latest upgrades and the ripples that R1 is causing in AI. As we stand on the brink of new opportunities—from more efficient validator setups to the next generation of AI models—our community of Node Runners remains ready to take on the challenge of being the decentralized infrastructure layer for the next wave of tech. I hope this update sparks ideas and discussions in your own node operations.
Your feedback is invaluable, so please share your thoughts and let us know what topics you’d like to see explored in future issues. Until next week, keep pushing the boundaries and stay connected!