
On Ethereum's 10th anniversary, we've just crossed a threshold that I believe will fundamentally reshape how we think about blockchain scalability.
Today marks more than just another product launch for us.
As someone deeply embedded in the EigenCloud ecosystem, I can confidently say this represents the most significant leap forward in data availability technology since the concept was first introduced.
EigenDA has achieved 100 MB/s throughput on mainnet.
But numbers alone don't tell the story. What excites me most is that EigenCloud has built something that can finally support the ambitious applications builders have been dreaming about but couldn't previously realize due to infra limitations.
When I tell people EigenDA V2 can process 800,000+ ERC-20 transfers per second or 80,000+ token swaps, I often see their eyes glaze over.
These numbers feel abstract until you realize what they actually mean: we're operating at 12.8x Visa's peak throughput.
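If you want to sanity-check that arithmetic yourself, here's a back-of-the-envelope sketch in Go. The ~125 bytes per compressed ERC-20 transfer is my own illustrative assumption; real sizes vary with each rollup's compression.

```go
package main

import "fmt"

func main() {
	const daBandwidth = 100_000_000 // 100 MB/s of data availability throughput
	const transferSize = 125        // assumed bytes per compressed ERC-20 transfer

	// 100 MB/s divided by ~125 bytes per transfer yields the headline figure.
	fmt.Println("implied transfers/sec:", daBandwidth/transferSize) // 800000
}
```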
This is happening right now, securing over $2 billion in customer assets and powering 75% of all assets on the Ethereum L2s that use alternative DA solutions.
When Fuel Network and Aevo migrated to V2, they weren't just adopting new technology; they were betting their businesses on our ability to scale with them.

What strikes me most is how we achieved this performance leap. It wasn't through brute-force scaling or by compromising on decentralization. Instead, the EigenDA team made three fundamental architectural decisions that I believe will influence data availability design for years to come.
The most elegant innovation in V2, in my opinion, is how the team completely reimagined the communication architecture.
In V1, the disperser would send everything at once, metadata and encoded chunks bundled together. It worked, but it was like trying to manage both air traffic control and cargo loading with the same system.
In V2, they created a clean separation. The control plane sends only blob headers (metadata) to DA nodes first. These nodes validate payment and rate limiting information, then actively request the data payloads from the data plane. This pull-based model might seem like a small change, but its implications are profound.
This separation enables permissionless dispersal, something I'm particularly excited about because it opens the door for anyone to interact with EigenDA without requiring explicit permission from centralized gatekeepers. It also optimizes high-bandwidth downloads in ways that weren't possible before.
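To make the pull-based flow concrete, here's a minimal sketch in Go. The types and function names are hypothetical stand-ins, not the actual EigenDA node code; the point is the two-phase shape: validate cheap metadata first, then pull the heavy payload.

```go
package main

import (
	"errors"
	"fmt"
)

// BlobHeader is a hypothetical stand-in for the metadata the control
// plane pushes to DA nodes: enough to validate, but no payload.
type BlobHeader struct {
	BlobKey     string
	PayloadSize int
	PaidBytes   int // payment accounting carried with the header
}

// DataPlane is a hypothetical interface to the high-bandwidth relay tier.
type DataPlane interface {
	FetchChunks(blobKey string) ([]byte, error)
}

// onBlobHeader mirrors the V2 pattern: validate first, then pull.
func onBlobHeader(h BlobHeader, dp DataPlane) ([]byte, error) {
	// Phase 1: cheap checks on metadata only (payment, rate limits).
	if h.PaidBytes < h.PayloadSize {
		return nil, errors.New("payment does not cover payload")
	}
	// Phase 2: the node decides when to download, smoothing its own bandwidth.
	return dp.FetchChunks(h.BlobKey)
}

type fakeDataPlane struct{}

func (fakeDataPlane) FetchChunks(blobKey string) ([]byte, error) {
	return []byte("chunks for " + blobKey), nil
}

func main() {
	data, err := onBlobHeader(BlobHeader{BlobKey: "blob-1", PayloadSize: 32, PaidBytes: 64}, fakeDataPlane{})
	fmt.Println(string(data), err)
}
```

The value of the inversion is that cheap metadata checks gate the expensive downloads, and each node schedules its own fetches rather than being firehosed by the disperser.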

I've learned that some of the most impactful engineering decisions appear deceptively simple.
The team's choice to standardize every blob to exactly 8192 chunks is one of those decisions.
This standardization transforms encoding from a complex, stateful operation into something beautifully predictable. More importantly, they now encode data only once for all quorums. Previously, adding a new quorum meant re-encoding data multiple times, an expensive operation that created bottlenecks.
Now, you can just encode once and distribute efficiently.
For validators running multiple quorums, there's a further optimization: they host only the data corresponding to the maximum of their per-quorum stakes, rather than the sum. These efficiency gains compound across the network, contributing significantly to our performance improvements.
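Here's a toy illustration of that storage math in Go, with hypothetical stake numbers; the real relationship between stake and chunk assignment involves coding parameters I'm glossing over.

```go
package main

import "fmt"

const blobChunks = 8192 // every V2 blob is encoded to exactly this many chunks

// chunksToHost is a simplified model: a validator hosts chunks in
// proportion to its largest per-quorum stake share, not the sum,
// because the data is encoded once and shared across quorums.
func chunksToHost(stakeShares []float64) int {
	maxShare := 0.0
	for _, s := range stakeShares {
		if s > maxShare {
			maxShare = s
		}
	}
	return int(maxShare * blobChunks)
}

func main() {
	// Hypothetical validator with 2% stake in quorum 0 and 5% in quorum 1.
	fmt.Println("chunks hosted:", chunksToHost([]float64{0.02, 0.05})) // 409, not 573 (2% + 5%)
}
```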
The third decision is LittDB, a purpose-built embedded database that isn't trying to be everything to everyone.
It makes specific tradeoffs that would be unacceptable in a general-purpose database: no data mutability, no read-write transactions, and sequential data expiration based on write order.
But these constraints enable it to excel at what matters most for data availability: embedded key-value storage at consistently high performance on commodity hardware.
The results speak for themselves: we're seeing a two-order-of-magnitude improvement over traditional database solutions.
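To show what those constraints buy, here's a minimal Go sketch of the same contract: write-once keys, no updates or deletes, and expiration strictly in write order. This illustrates the access pattern, not LittDB's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// writeOnceStore illustrates the LittDB-style contract: keys are
// immutable once written, and data expires in the order it arrived.
type writeOnceStore struct {
	data  map[string][]byte
	order []string // write order drives sequential expiration
}

func newStore() *writeOnceStore {
	return &writeOnceStore{data: map[string][]byte{}}
}

func (s *writeOnceStore) Put(key string, val []byte) error {
	if _, ok := s.data[key]; ok {
		return errors.New("immutable: key already written")
	}
	s.data[key] = val
	s.order = append(s.order, key)
	return nil
}

// ExpireOldest drops the n oldest writes; there are no arbitrary deletes.
func (s *writeOnceStore) ExpireOldest(n int) {
	for i := 0; i < n && len(s.order) > 0; i++ {
		delete(s.data, s.order[0])
		s.order = s.order[1:]
	}
}

func main() {
	s := newStore()
	_ = s.Put("blob-1", []byte("chunk data"))
	fmt.Println(s.Put("blob-1", []byte("overwrite"))) // immutable: key already written
	s.ExpireOldest(1)
	_, ok := s.data["blob-1"]
	fmt.Println("present after expiry:", ok) // false
}
```

Dropping mutability and transactions lets the engine lean into sequential writes and expiry by truncation, which fits the shape of a DA node's workload.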
Understanding the technical innovations is one thing, but seeing how they work together in practice is what truly excites me.
The V2 workflow represents a masterclass in distributed systems design:
1. L2 sequencers create standardized blobs with exactly 8192 chunks.
2. Payment validation occurs upfront, enabling economic controls for permissionless access.
3. The control plane sends blob headers to DA nodes for validation.
4. DA nodes make pull requests for data payloads from the data plane.
5. GPU-accelerated erasure coding processes data with a single encoding for all quorums.
6. Concurrent download connections distribute chunks to operators.
7. LittDB storage provides high-performance persistence.
8. DA certificates eliminate on-chain confirmation delays, reducing latency by 60x.
9. Universal blobkeys simplify blob identification and retrieval (see the sketch below).
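To ground that last step, here's a sketch of a content-derived blob identifier, under the assumption that a blobkey is a digest over header fields; the exact fields and hash function are protocol-defined, so treat this as illustrative rather than EigenDA's actual derivation.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// blobKey sketches a universal, content-derived identifier: hashing
// header fields yields one key that any party can recompute for
// retrieval. (Illustrative only; the real derivation is protocol-defined.)
func blobKey(commitment []byte, payloadSize uint64) [32]byte {
	h := sha256.New()
	h.Write(commitment)
	size := make([]byte, 8)
	binary.BigEndian.PutUint64(size, payloadSize)
	h.Write(size)
	var key [32]byte
	copy(key[:], h.Sum(nil))
	return key
}

func main() {
	key := blobKey([]byte("kzg-commitment-bytes"), 8192)
	fmt.Printf("blobkey: %x\n", key)
}
```

The appeal of a content-derived key is that the disperser, DA nodes, and retrieval clients all compute the same identifier independently, with no naming authority in the loop.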

The elegance of this flow is that each component is optimized for its specific role while maintaining the overall system's decentralization and security properties.
What I appreciate most about the EigenDA team is their obsessive focus on testing. They didn't just build V2 and hope it would work; they subjected it to conditions that would break most systems.
Their 60-hour continuous load test across 14 independent validators on three continents wasn't just about proving the system could hit 100 MB/s. They pushed it to a 124 MB/s peak to understand its true limits and ensure there was headroom for unexpected load spikes.
The geographical distribution was crucial. It's one thing to achieve high performance in a single data center; it's entirely different to maintain that performance across continents with real network latency and reliability challenges.
The fact that they maintained consistent performance in these conditions gives me confidence in the production deployments.
Fuel's journey is particularly meaningful to me. They were the first rollup to achieve Stage 2 decentralization with V1, and now Fuel Ignition is the first to use V2 on mainnet as they scale toward their 150K TPS vision.
Aevo's use case in on-chain options and derivatives presents different challenges: high-frequency trading requires not just throughput but consistently low latency.
While rollup scaling is transformative, I believe we're just scratching the surface of what's possible.
EigenDA V2's 100 MB/s throughput surpasses what's needed for global-scale payments, which opens the door to applications we're only beginning to imagine.
The EigenCloud vision extends far beyond traditional blockchain use cases.
When I think about AI inference, gaming, and video streaming running on verifiable infra, I see applications that will demand gigabytes per second of data throughput. V2 is a crucial step toward that future.
If you're a dev wondering whether blockchain infra can support your ambitious application, V2 changes the equation.
The question is no longer whether the infra can scale; it's whether you're ready to build something that leverages this new capability.
The 5-second average latency and 10-second p99 latency we've achieved mean that user experiences can finally feel responsive rather than sluggish. The economic model with on-demand payments and reservations makes it accessible to projects of all sizes.
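As a mental model for the two payment modes, here's a sketch with entirely made-up prices and function names (this is not the real pricing interface): a reservation prepays for sustained bandwidth, while on-demand is metered per byte dispersed.

```go
package main

import "fmt"

// Hypothetical cost models for the two payment modes; the formulas
// and prices are invented for illustration only.

// reservationCost prepays a fixed bandwidth (bytes/sec) for a duration.
func reservationCost(bytesPerSec, seconds, pricePerByteSecond float64) float64 {
	return bytesPerSec * seconds * pricePerByteSecond
}

// onDemandCost meters each byte as it is dispersed.
func onDemandCost(totalBytes, pricePerByte float64) float64 {
	return totalBytes * pricePerByte
}

func main() {
	// A rollup posting steadily might reserve 1 MB/s for a day,
	// while a smaller project pays as it goes for 2 GB total.
	fmt.Println("reservation:", reservationCost(1e6, 86_400, 1e-12))
	fmt.Println("on-demand:  ", onDemandCost(2e9, 3e-12))
}
```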
As I write this on Ethereum's 10th anniversary, I can't help but reflect on how far we've come.
Ten years ago, the idea of processing hundreds of thousands of transactions per second on Ethereum seemed like science fiction. Today, it's reality.
But what excites me most isn't what we've achieved, it's what becomes possible next. When builders no longer need to worry about data availability as a constraint, when they can focus purely on creating value for users rather than optimizing around infrastructure limitations, that's when we'll see the truly transformative applications emerge.
Can't wait to see what you guys end up building with EigenDA V2!