Guide to Ethereum Roadmap: https://shows.banklesshq.com/p/guide-to-the-ethereum-roadmap-jon?s=r#details
What will the vibe be after the merge?
Scale the computation, base-layer for roll-ups
True scaling is not just TPS. It's about throughput relative to the cost to validate
Scaling computation without sacrificing validation. There might be some specialization and centralization in block production, but validation needs to be decentralized (kept to regular users)
Computation will happen on rollups. Rollups scale computation while leveraging Ethereum's security
The transition from the older version of the Ethereum roadmap (scale the actual layer 1 through execution sharding) to the new version, which focuses on data availability: execution sharding is essentially scrapped, and sharding is now exclusively about data, maximizing Ethereum's data-space throughput
Three major components:
Danksharding and ProtoDanksharding (DS + PDS)
ProtoDanksharding: a halfway step to Danksharding
Proposer / Builder Separation (PBS)
Data Availability Sampling (DAS)
What does this end state look like?
Builder: responsible for putting together one really big block that holds all the data (the beacon chain block)
Validator: very low resource requirements
DAS (Data Availability Sampling): Check availability
Data availability VS data retrievability
DAS allows nodes (even light clients) to easily and securely verify that all of the data was made available without having to download all of it (polynomial extensions: extend the data using a Reed-Solomon code, commit to it with KZG commitments)
The data will only be available on Ethereum for a sufficient amount of time for the whole world to have downloaded it; it is not kept on chain forever
Resource needed: hard-drive space
Keep validation easy: each validator only checks a small subset of the data, while the ecosystem as a whole gets to leverage the full expression of all of the data
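The sampling idea above can be sketched with a back-of-the-envelope calculation (illustrative numbers, not protocol constants): with a 2x Reed-Solomon extension, the data is recoverable whenever at least half of the extended chunks are available, so an adversary hiding the data must withhold more than half, and each random sample catches that with probability greater than 1/2.

```python
# Why a handful of random samples gives a strong availability guarantee
# under a 2x Reed-Solomon extension (illustrative sketch).

def detection_probability(samples: int) -> float:
    # For data to be unrecoverable, more than half of the extended
    # chunks must be withheld, so each uniformly random sample hits a
    # missing chunk with probability > 1/2. The chance that at least
    # one of `samples` queries detects the withholding is therefore
    # at least:
    return 1 - 0.5 ** samples

for k in (10, 20, 30):
    print(f"{k} samples -> detection probability >= {detection_probability(k):.10f}")
```

This is why even light clients can participate: confidence grows exponentially in the number of samples, while the download stays tiny.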
EIP 4488:
How Layer 2 Rollups publish transaction data on Layer 1: calldata
Cheaper calldata (gas repricing), paired with pruning of historical chain data
Why do people want to retrieve historical data:
Dapps
Explorers
Data storage: it's not the task for Ethereum but a question for Dapps (Can be stored by roll-up providers)
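A rough sense of what EIP-4488's proposed calldata repricing (16 gas per non-zero byte down to 3) means for rollup batches; the per-byte figures come from the EIP draft, the batch size is just an example:

```python
# Rough cost comparison for rollup calldata under EIP-4488's proposed
# repricing. Numbers are from the EIP draft; treat as illustrative.

CALLDATA_GAS_PER_BYTE_OLD = 16  # current non-zero-byte calldata cost
CALLDATA_GAS_PER_BYTE_NEW = 3   # proposed in EIP-4488

def calldata_gas(num_bytes: int, per_byte: int) -> int:
    return num_bytes * per_byte

batch = 100_000  # a hypothetical 100 KB rollup batch
old = calldata_gas(batch, CALLDATA_GAS_PER_BYTE_OLD)
new = calldata_gas(batch, CALLDATA_GAS_PER_BYTE_NEW)
print(f"old: {old} gas, new: {new} gas, savings: {1 - new / old:.0%}")
```

The repricing alone cuts rollup data costs by roughly 80%, which is why the EIP pairs it with a per-block calldata cap to bound worst-case block size.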
PBS (Proposer-Builder Separation):
Two roles:
Miners (Block builders): Miners “vote” by building on top of the previous block
Validators: after the merge validators will vote directly on blocks as valid or invalid
PBS: Specialized builders will put together blocks and bid for proposers (validators) to select their block
A callback to Vitalik's endgame: centralized block production with trustless and decentralized validation
Builders
receive priority fee tips plus whatever MEV they can extract
Proposer:
Selected from the validator set using the standard RANDAO mechanism
Commit-reveal scheme: the full block body isn’t revealed until the block header has been confirmed by the committee
Block time: Blocks after the merge will be a fixed 12 seconds, so here we’d need 24 seconds for a full block time (two 12-second slots)
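The proposer/builder flow above can be sketched as a simple sealed-bid auction plus commit-reveal; all names here are illustrative, not client code:

```python
# Minimal sketch of the PBS auction: builders submit (header, bid)
# pairs, the proposer commits to the highest-paying header, and the
# full block body is only revealed after the header is confirmed.

from dataclasses import dataclass

@dataclass
class Bid:
    builder: str
    header: str  # stand-in for a block header / commitment
    value: int   # payment offered to the proposer

def select_winning_bid(bids: list[Bid]) -> Bid:
    # The proposer simply takes the highest-paying header; it never
    # sees the block body (and its MEV) before committing.
    return max(bids, key=lambda b: b.value)

bids = [
    Bid("builder_a", "0xheaderA", 120),
    Bid("builder_b", "0xheaderB", 310),
    Bid("builder_c", "0xheaderC", 250),
]
winner = select_winning_bid(bids)
print(winner.builder)  # builder_b wins the auction
```

The commit-reveal split is the key design choice: because the proposer commits before seeing the body, it cannot steal the builder's MEV, which is what lets block building centralize without compromising the proposer role.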
DS (Danksharding): Check availability + block reconstruction
Unified settlement and DA layer
One builder assembles the entire block, with one proposer and one committee voting on it at a time
Validators attest to the availability of 2 rows and 2 columns in their assigned slot
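The row/column attestation above relies on the blob data being laid out as a 2D matrix and erasure-coded in both dimensions; a minimal sketch of just the index selection (not the actual erasure coding or KZG checks), with made-up sizes:

```python
# Sketch of a validator's 2D sample assignment under Danksharding:
# the data matrix is extended in both dimensions, and each validator
# is assigned a couple of rows and columns to check for availability.

import random

def assigned_samples(extended_size: int, rows: int = 2, cols: int = 2,
                     seed: int = 0) -> dict:
    # Deterministic RNG so the example is reproducible; the real
    # protocol derives assignments from consensus randomness.
    rng = random.Random(seed)
    return {
        "rows": rng.sample(range(extended_size), rows),
        "cols": rng.sample(range(extended_size), cols),
    }

samples = assigned_samples(extended_size=512)
print(samples)
```

Because every row and column is independently erasure-coded, checking a few of each is enough for the validator set as a whole to guarantee the full matrix can be reconstructed.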
PDS (EIP-4844, ProtoDanksharding)
Rollups today use L1 “calldata” for storage which persists on-chain forever
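This is the contrast EIP-4844 addresses: blob-carrying transactions put rollup data in a separate fee market and prune it after a retention window, instead of keeping it in calldata forever. The blob-size constants below are from the EIP:

```python
# Blob dimensions under EIP-4844 (ProtoDanksharding). Blob data is
# committed to with KZG and pruned after a retention window, rather
# than persisting in execution-layer history like calldata.

FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT

print(f"one blob carries {BLOB_SIZE // 1024} KiB of rollup data")
```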
