Multiple Concurrent Proposers (hereafter MCP), the idea of having multiple active proposers at the same time, is an interesting mechanism for solving censorship issues. It is not to be confused with the Model Context Protocol (its AI namesake) or Multi-Party Computation (MPC), although some similarities can be found. In this article, we will cover the what, the how, and why having more than a single proposer responsible for block proposing is an integral part of improving blockchain mechanism design.
While the general idea of MCP is relatively simple to understand, it isn't a reality on practically any blockchain today. To some extent, however, Bitcoin's mining pools function somewhat like multiple concurrent proposers. Plus, anyone can get a transaction included in Bitcoin if they run a Bitcoin full node.
On the other hand, the multiple concurrent builders on Solana share some similarities with how a full MCP implementation would look, and at least embody the concept of several different actors contributing to the creation (but not the proposing) of a block. On Ethereum, most blocks (~95%) are built through MEV-Boost. While several builders are active at any time, there can only be one auction winner, so it loses even the angle you could argue Solana has with its multiple concurrent builders. The fact of the matter is that not a single chain today has more than one proposer with the power to propose a block at any given time.
The easiest way to think about MCP is to break it down into two parts: more than one proposer providing sub-blocks at a time, and a final merging of those sub-blocks into one block.
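To make that shape concrete, here is a minimal, purely illustrative sketch of those two parts; all type and function names are hypothetical, not taken from any real client:

```python
# A toy model of MCP: several proposers each produce a sub-block, and a
# merge step combines them into one final block. Since the same transaction
# may land in several sub-blocks, the merge also deduplicates.
from dataclasses import dataclass

@dataclass
class SubBlock:
    proposer: str
    txs: list[str]  # transaction identifiers

def merge(sub_blocks: list[SubBlock]) -> list[str]:
    """Combine sub-blocks into one block, dropping duplicate transactions."""
    merged: list[str] = []
    seen: set[str] = set()
    for sb in sub_blocks:
        for tx in sb.txs:
            if tx not in seen:
                seen.add(tx)
                merged.append(tx)
    return merged

print(merge([SubBlock("A", ["tx1", "tx2"]), SubBlock("B", ["tx2", "tx3"])]))
# ['tx1', 'tx2', 'tx3']
```

Everything that makes MCP hard lives inside that innocuous-looking merge step, as we'll see below.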
These groups of proposers are likely subcommittees (like the current ones in Ethereum), since doing this with the entire validator set would be infeasible. This also means we must ensure that no single subcommittee is just one staking pool, which could lead to censorship and collusion concerns. Another thing to remember is the relative unsophistication of the fabled home staker - MCP means increased complexity.
Let's summarise why MCP has properties worth having, and which issues come with it:
Reasons for having multiple concurrent proposers:
Increase censorship resistance (needed now more than ever, especially given current bottlenecks)
Scale the base protocol instead of outsourcing
Disperse MEV as no single proposer (or builder) decides on inclusion
Issues that are immediately apparent when implementing MCP:
Prevalence of competition within a block (inclusion and ordering) - the rise of PGAs?
Invalid transactions - simulations
Hardware requirements increase
Data availability of invalid transactions
Needing a finality gadget
Let’s review the properties one by one, starting with the strengths. Then, we’ll consider whether the possible issues are too significant for MCP to be implemented on chains with considerable technical baggage.
Increase censorship resistance
Most blockchains today that take a deterministic approach to finality and consensus derive it from a singular leader who sets the path for what gets included in a block (with some nuances). Afterwards, the block is propagated, and eventually most of the validator set agrees on it and it is added to the canonical chain. In Ethereum, we have subcommittees of this set to enable faster blocks (but longer epochs to get the full set to agree). In an MCP setup, several different proposers build their own local blocks that are eventually merged. This means that instead of a singular block proposer (or builder/relay; preferably we forgo these in MCP setups), there are now several entry points into a block. This makes censorship considerably harder (it is easy to go to https://www.mevwatch.info/ and see the effect a centralised relay has on inclusion). In a world with several includers, censorship resistance is hardened.
The current bottlenecks (do note that several teams, including Flashbots, are working on improving the situation) stem from the fact that a single builder wins an auction whose block a single proposer then proposes. This is further exacerbated by a single (trusted) relay sitting in the middle as auctioneer. While the core Ethereum protocol is decentralised, the way we currently get included in the Ethereum blockchain isn't. On Solana, the centralised aspect of the Jito relay/builder is also a legitimate concern that they're trying to solve via restaking (the FIRST real yield restaking “AVS”!!). In Bitcoin, this can be addressed by running your own full node, which is relatively low effort; however, this comes at the cost of finality. Bitcoin has probabilistic finality and no “finality gadget”, which would be needed to ensure deterministic and fast finality in MCP implementations. Bitcoin relies on the longest-chain rule.
Scale the base protocol instead of outsourcing
Generally, much of the actual development work has been outsourced to third-party teams to solve problems inherent in the “stale” design of L1s. This isn't limited to Ethereum, but extends much more broadly. By implementing MCP, you start to deal in-protocol with some of the issues previously solved (or created) by out-of-protocol implementations. In this case, you heighten hardware requirements (but increase censorship resistance) - potentially a worthwhile trade-off depending on the decentralisation needs of the protocol's users. Anyhow, this is also something you could expect to see on Solana, especially given the centralised aspects of Jito. Furthermore, since you're spreading elements of the block building across several parties, the end result is an increase in total bandwidth.
Disperse MEV
One of the more unique outcomes of MCP is that, instead of leaving all the MEV in one basket (since either a single proposer or builder decides on inclusion), the MEV is distributed among the proposers active in a specific block. This means you no longer have to hope to win the MEV lottery, but can instead expect a steadier stream of rewards (most validators are companies, and companies tend to prefer a constant reward structure). It is also generally a better way to ensure that no single party can re-order transactions to extract MEV (the current state of affairs). This ties back into the censorship-resistance case as well.
If you've read any of our other articles in the past, you might be familiar with the CAP theorem: three informal properties a distributed system must possess to work as intended.
C: This stands for consistency; basically, the UX should be identical from user to user, and users should feel like they are interacting with a single database every time they use the system.
A: Stands for availability, which is what we also refer to as liveness. It refers to the fact that all messages are handled by nodes in the system and reflected in future blocks/queries. All commands need to be carried out.
P: This stands for partition tolerance (or censorship resistance), which means that the system needs to uphold consistency and availability even during an attack or partition of network nodes.
MCP is one of the best ways to achieve key parts of the CAP theorem (specifically censorship resistance), which are generally waved off as game-theoretic issues. Trust the protocol, not game theory.
Now, positives don't come without negatives, and the CAP theorem works in such a way that something bad accompanies something great - it is almost impossible to fulfil it in its entirety. As such, let's look at some issues that may arise when implementing MCP.
Issues with MCP that need to be addressed
The main issue is generally considered to be that MCP, to some extent, introduces two phases of competition within a block. The first is an inclusion fee, and the second is an ordering fee. The ordering fee is especially difficult to deal with since, in the first phase, each local producer only has its own view of a block, not the whole view. This means that working out how much to bid to land somewhere specific in a block is a difficult task.
Not only is it difficult, but it essentially (in an auction-type setup) brings us back to the days of priority gas auctions (PGAs). While you do get much stronger censorship-resistance guarantees, you essentially bring back what MEV-Boost sought to eliminate: high median gas fees in competitive slots, pricing out others during inclusion, and so on.
There are other issues around transactions beyond how you order them (both from a local and a global perspective). Here we are referring specifically to the propagation of the local<->global view of a block, and what sending invalid transactions means. At the beginning of a phase (before the merging of sub-blocks into a singular block built by multiple proposers), there is no view of which state changes might affect transactions from other proposers. This means proposers may hand each other invalid transactions (even more of an issue if these are posted on-chain as data availability). It could also be that the parameters a validator in the current MCP set is expected to follow, such as max gas, aren't being respected. These specific issues are relatively simple to solve: you need an actor (or a set of rules in the protocol) that, once the previously mentioned DA is revealed, orders txs by fees and removes the lowest-fee transactions among those targeting the same state changes, as sketched below. Again, the issue is that we reintroduce a problem we had already solved (PGAs). However, if we don't have any mechanism such as auctions (PGAs) for searchers/builders to control block positioning, we simply resort to spam and reinforced latency games. Everything discussed here also likely undermines preconfs.
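As a rough sketch of that rule (assuming, hypothetically, that each transaction carries an identifier for the state change it wants to make; real conflict detection would require execution):

```python
# Keep only the highest-fee transaction among those targeting the same
# state change, then order the survivors by fee. `effect_id` is a
# hypothetical stand-in for "same wanted state changes".
from dataclasses import dataclass

@dataclass
class Tx:
    tx_hash: str
    fee: int
    effect_id: str  # identifier for the state change this tx performs

def merge_and_dedup(sub_blocks: list[list[Tx]]) -> list[Tx]:
    best: dict[str, Tx] = {}
    for sub_block in sub_blocks:
        for tx in sub_block:
            current = best.get(tx.effect_id)
            if current is None or tx.fee > current.fee:
                best[tx.effect_id] = tx
    return sorted(best.values(), key=lambda tx: tx.fee, reverse=True)
```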
An extra consideration on both Ethereum (post-Pectra, so now) and Solana is that transactions aren't invalidated by nonce with EIP-7702, and Solana obviously lacks nonces (for transactions, not accounts). This means that figuring out whether a transaction is invalid becomes quite the issue - essentially, you'd need to simulate all combinations to figure out what the correct ordering should be, which will likely add considerable bandwidth strain to the network. On Solana, the already substantial hardware requirements will likely make this process easier, but on Ethereum, an increase on the hardware side is definitely needed. One likely way to solve this on the Ethereum side is to compute the actual ordering on the execution client when the merging of sub-blocks happens (instead of trusting the builder + relay); again, this requires better hardware.
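To illustrate why naive exhaustive simulation doesn't scale, consider a toy version of the problem (the `simulate` callback is a hypothetical stand-in for a full state-transition function):

```python
# Checking every possible ordering of n transactions costs O(n!) simulations;
# even a small merged block makes this explode.
import math
from itertools import permutations

def find_valid_orderings(txs, simulate):
    """Return every ordering in which no transaction turns out invalid."""
    return [list(order) for order in permutations(txs) if simulate(list(order))]

# Toy validity rule: any ordering not starting with "c" is fine.
ok = find_valid_orderings(["a", "b", "c"], simulate=lambda order: order[0] != "c")
print(len(ok))               # 4 of the 6 orderings survive
print(math.factorial(10))    # 3628800 orderings for just 10 transactions
```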
On the DA side, as mentioned earlier, another important aspect is the issue of leaking these invalid transactions on-chain (making them essentially free). This further motivates running the simulations mentioned above before consensus, so that invalid transactions can be filtered out during merging. Various implementations from the FOCIL work (sending addresses rather than full txs) could likely be reused here - unless we trust the simulation completely, although if an actor (rather than protocol rules) is involved, it could interfere with simulations by making other transactions invalid.
As mentioned in the summary, implementing MCP likely requires a finality gadget that handles synchronisation - this is also what we alluded to above regarding ordering simulation pre-consensus. This also introduces the ability to delay the block proposal for timing games, which we already see during MEV-Boost auctions. One effect could be seeing other blocks before proposing your own, creating opportunities to send transactions that invalidate others’ (especially useful for a searcher). If we implement extremely stringent rules to combat timing games, we run into the issue of shutting out validators that aren't as performant as others (meaning more missed blocks).
One possible way to combat timing games is to learn from improvements made by chains such as Monad, which utilises asynchronous (delayed) execution. Here, you could implement a rule stating that the full effects of a transaction set from all active proposers in a slot can't be known until the entire set has been constructed.
This considerably limits throughput, since the likelihood of the same transactions being included by more than one proposer is relatively high. Delayed execution also means that despite being “included” in a sub-block, a transaction may still miss the merged block, so a transaction can revert despite inclusion. This mirrors the double-inclusion issue mentioned at the beginning. Note that you'd likely need a finality gadget that acts this way (and as such executes, propagates and finalises the block).
While we've primarily focused on the Ethereum side so far, it is worth noting that Solana is pursuing MCP more actively. This has been accelerated by Max Resnick joining Anza, and Anatoly has been outspoken about wanting to implement it as well. Anatoly recently published a post covering it, so let's summarise parts of their focus and the issues they foresee:
What happens if blocks arrive at different times from different validators (this also ties into timing games)
How to merge transactions (again, covered above)
How to split a block's capacity (max gas) between validators to maximise bandwidth (a simple split sketch follows this list)
Issues arising from wasted resources (the same transactions landing in several sub-blocks, also raised above)
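For the capacity-split question, here is a minimal sketch of one possible rule (stake-weighted shares; the actual allocation mechanism is an open design question, and all names here are hypothetical):

```python
def split_gas(block_gas_limit: int, stakes: dict[str, int]) -> dict[str, int]:
    """Give each active proposer a share of the block's gas, weighted by stake.

    Integer division means a little gas may go unallocated; a real design
    would also need to handle unused shares and reallocation.
    """
    total_stake = sum(stakes.values())
    return {
        proposer: block_gas_limit * stake // total_stake
        for proposer, stake in stakes.items()
    }

print(split_gas(30_000_000, {"A": 50, "B": 30, "C": 20}))
# {'A': 15000000, 'B': 9000000, 'C': 6000000}
```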
Generally, many of the problems in implementing MCP on Solana mirror the issues on Ethereum. However, Solana puts much more weight on optimising bandwidth and performance, meaning that managing block resources and merging blocks matter a lot more, all while ensuring consensus stays robust.
Another key aspect we alluded to at the article's beginning: MCP can also be used to scale the protocol, not just strengthen it. Not only that, but it can also be used to protocolise application-specific sequencing (ASS) via ordering (which we have written a lot about; see here & here). There could be a world where, instead of a proposer for XYZ transactions, an app is the proposer and orders its own set of transactions according to its wishes (the world Delta is working towards) - or vice versa, where the app gives a proposer a set of rules to follow when ordering its transactions. Interestingly enough, MEV taxes to applications (the tx originators) combined with MCP are also being explored (and are a lot easier since no single proposer controls inclusion).
In a recent post, Max and Anatoly view MCP as enabling tighter spreads (the decentralised NASDAQ idea) via application-specific sequencing. In the current climate, as mentioned before, only a single leader proposes blocks. This means that when prices move on an orderbook, quoters will seek to cancel certain bids. In the single-proposer situation on Solana, a Jito auction is run since the proposer has a monopoly. Preferably (as on Hyperliquid), cancels should be prioritised to allow makers to quote tighter spreads - hence the desire for ASS to prioritise exactly that for apps. With a single leader, the leader has a monopoly to run auctions; with MCP, we can forgo this. In all likelihood, though, ASS in this case will be limited to state silos. The presented proposal essentially means that application developers can define what should be prioritised for a specific account (i.e., a cancellation). The highest-priority txs get executed first for that specific account (not necessarily the max tip, but rather cancels, since they're essential for active liquidity). The idea is that you have a fee threshold for everyday transactions, while certain prioritised transactions can exceed it.
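A minimal sketch of what such an app-defined rule could look like (all names hypothetical; the proposal's actual interface is not specified here):

```python
# Per-account, application-defined prioritisation: the app ranks cancels
# ahead of everything else for its own account, regardless of tip.
from dataclasses import dataclass

@dataclass
class Tx:
    account: str  # the account/program this tx touches
    kind: str     # "cancel", "place", "take", ...
    tip: int

def app_priority(tx: Tx) -> tuple[int, int]:
    kind_rank = 0 if tx.kind == "cancel" else 1  # cancels always first
    return (kind_rank, -tx.tip)                  # then by tip, highest first

def order_for_account(txs: list[Tx], account: str) -> list[Tx]:
    """Order one account's transactions by the app-defined rule."""
    return sorted((tx for tx in txs if tx.account == account), key=app_priority)
```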
We also discussed the issue of both inclusion and ordering fees above. Solana seems fine with this: inclusion fees will go to the validator who included a transaction, while ordering fees will be paid to the protocol (burned). When you merge the many sub-blocks into a block, you then take the merged set of transactions, order them according to their ordering fees, and execute them.
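Under those stated assumptions, the merge step could look roughly like this (a sketch, not Solana's actual implementation):

```python
# Inclusion fees accrue to the including validator, ordering fees are
# burned, and the merged block is ordered purely by ordering fee.
from dataclasses import dataclass

@dataclass
class Tx:
    inclusion_fee: int
    ordering_fee: int

def settle_and_order(sub_blocks: dict[str, list[Tx]]):
    validator_rewards = {
        validator: sum(tx.inclusion_fee for tx in txs)
        for validator, txs in sub_blocks.items()
    }
    merged = [tx for txs in sub_blocks.values() for tx in txs]
    burned = sum(tx.ordering_fee for tx in merged)
    merged.sort(key=lambda tx: tx.ordering_fee, reverse=True)
    return merged, validator_rewards, burned
```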
The setup above plays into Turbine, Solana's block propagation mechanism. Leaders (under MCP) produce shreds and send them to a relay in the Turbine tree - relays should hold shreds from all leaders. Relays then ping a single consensus leader about the shreds they hold; this leader must include a large enough fraction of the messages, then broadcast them and come to consensus.
With the recent announcement of Alpenglow, this implementation might look slightly different, considering the single layer of relay nodes, as well as the removal of voting as on-chain transactions (votes now move off-chain). Together, these changes should make running a validator cheaper, which would likely increase their number and allow less sophisticated operators to participate. This sounds great for decentralisation, but could well affect the performance of the chain. One interesting question to ask, given the new announcements, is specifically how they plan to deal with validator faults in a world where Solana has MCP.
MCP in other ecosystems:
There has also been some work on the Cosmos side on implementing MCP; notably, Informal Systems has just released a spec for multiple proposers in a BFT consensus model. Specifically, they use a secure broadcast protocol, wherein vote extensions force each sub-block from a validator to be acknowledged by other validators. The block-building part of Tendermint/CometBFT then agrees on a set of these. This means you end up with a large number of sub-blocks from specific validators.
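A rough sketch of the acknowledgement idea (the quorum threshold is an assumption on our part, not taken from the spec):

```python
# A sub-block only enters the agreed set once enough validators have
# referenced it in their vote extensions.
def agreed_subblocks(vote_extensions: dict[str, set[str]],
                     total_validators: int) -> set[str]:
    """vote_extensions maps validator -> sub-block hashes it acknowledges."""
    counts: dict[str, int] = {}
    for acked in vote_extensions.values():
        for subblock in acked:
            counts[subblock] = counts.get(subblock, 0) + 1
    quorum = (2 * total_validators) // 3 + 1  # standard BFT quorum assumption
    return {sb for sb, n in counts.items() if n >= quorum}
```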
Sei is also currently working on MCP (and trying to be the first to implement it) via their work on Sei Giga, which derives part of its design from the Autobahn paper (highly recommended reading). The general idea is that data availability is separated from ordering and made available faster via separate (multiple) lanes before being ordered into a global chain. It differs slightly from the idea of MCP on Ethereum in that validators aren't necessarily all providing blocks simultaneously in set slots, but rather continuously, with the results then merged into a global view.
Patrick O'Grady of Commonware is also working on an approach to this.
Lastly, Delta also has a design wherein the base layer functions as a censorship-resistant bulletin board, while each application runs its own concurrent sequencer that produces blocks settled onto a global state layer.
Recommended Reading:
The Path to Decentralised Nasdaq, Max Resnick (Anza) & Anatoly Yakovenko (Solana Labs)
rain&coffee