

EOF(ork) in the Road: The Controversy around EVM Object Format (EOF) is written by Josh Weintraub (@0xTraub). Special thanks to Rika Goldberg (@RikaGoldberg), Kaleb Rasmussen (@kaleb0x), and Cole Schendl (@404_cole) for additional contributions and feedback.
_____________________
EOF is highly controversial among client teams and Solidity developers with no clear consensus.
Implementing EOF has significant benefits for code security, backwards compatibility, and gas efficiency.
It accomplishes this at the cost of adding complexity and potentially creating technical debt without clear short-term benefits to end-users.
EOF may be a proxy for a larger debate within the Ethereum community about protocol ossification and prioritizing price/value-extraction versus technical competition with alternative L1 blockchains.
The Ethereum Virtual Machine (EVM) is the backbone of Ethereum that enables smart contracts to exist. It is the part of a node's software that loads and executes smart contract code during a transaction. Without the EVM, all you would be able to do is send Ether.
Virtual machines are very common in the Web2 world. When a computer simulates another computer, the computer being run within the other is a virtual machine. The benefit of virtual machines is that they can replicate computation on different devices: multiple machines, each with a different operating system and geographically distributed, can perform the same computation.
Opcodes are the instructions a computer uses to perform operations. Each opcode performs a specific operation: to add two numbers, use the ADD opcode; when one contract wants to call another, it uses the CALL opcode. EVM bytecode is a series of opcodes executed sequentially to produce an output. Two different machines, with access to the same bytecode and the same starting state, will produce the same output. This is how Ethereum maintains consensus: verifying a transaction means running it through the EVM and ensuring that the execution does not revert. The EVM was the first mechanism for enabling smart contracts on a blockchain, and today it remains an industry standard adopted by many new chains and L2s.
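To make the opcode model concrete, here is a deliberately tiny Python sketch of a stack machine. It is not the real EVM and supports only three instructions, though the opcode values (0x01 for ADD, 0x60 for PUSH1, 0x00 for STOP) match the actual instruction set. It shows why any machine running the same bytecode from the same starting state reaches the same result:

```python
ADD, PUSH1, STOP = 0x01, 0x60, 0x00  # real EVM opcode values

def run(bytecode):
    """Execute bytecode on a fresh stack and return the final stack."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH1:                     # push the next byte as a literal
            stack.append(bytecode[pc + 1])
            pc += 2
        elif op == ADD:                     # pop two values, push their sum
            stack.append(stack.pop() + stack.pop())
            pc += 1
        elif op == STOP:
            break
        else:
            raise ValueError(f"unknown opcode {op:#04x}")
    return stack

# PUSH1 2, PUSH1 3, ADD: every machine running this ends with stack [5]
print(run(bytes([PUSH1, 2, PUSH1, 3, ADD, STOP])))  # [5]
```

Determinism of exactly this kind is what lets thousands of geographically distributed nodes agree on the outcome of every transaction.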
EVM Object Format (EOF) is an attempt by the Execution Layer (EL) teams to revamp how smart contract bytecode operates and the rules of the EVM. It is part of what Vitalik nicknamed The Splurge on the Ethereum roadmap, and it combines nearly a dozen different EIPs, all targeting the execution layer. EOF is several years into development and slated for inclusion in the next hard fork (Fusaka) along with a basic version of data-availability sampling. However, the developer community has begun to push back hard against implementing EOF, with many calling for its delay and exclusion from Fusaka. This article has been abbreviated significantly to focus on a few of the controversies surrounding EOF. Versioning, JUMPDEST analysis, and SWAPN were chosen as uniquely salient and digestible, but there are many other aspects of EOF, all of which are highly complex with reasonable arguments on both sides. This article should not be considered a complete and authoritative analysis of the EVM Object Format, but rather a highlight of the complicated and heated debate currently surrounding it.

For the most part, the EVM has not been significantly altered since its genesis. A contract deployed in the genesis block is executed using the same rules a contract deployed yesterday would. This means that any changes to the EVM must be compatible with 100% of existing contracts. Every time a rule is changed, it threatens to break a contract’s functionality in some way. Given the remaining items on the Ethereum roadmap, this can cause unexpected difficulties in designing alterations to Ethereum.
A few years ago, there was a proposal to deprecate the `SELFDESTRUCT` opcode, which allowed a smart contract to delete itself from the blockchain, as a precursor to the implementation of The Verge. As it turned out, there were several dapps on mainnet that used SELFDESTRUCT as part of their core functionality. Deprecating the opcode would mean immediately rendering all of those contracts unusable. These applications were not the largest in TVL, but they were still actively maintained by their protocol teams and held user deposits that could not quickly be withdrawn. This created an ideological split in the Ethereum developer community. Many argued that supporting Verkle Trees should be prioritized over supporting small applications with limited TVL/PMF. However, allowing the network to brick a contract unexpectedly and without recourse would go against the core principles of Ethereum and raise existential questions about its values. In the end, a compromise was found between protocol and core developers, but this debate only occurred because the EVM doesn't have a versioning scheme.
A versioning scheme would allow EVM maintainers to write new rules for how the EVM should operate without worrying about it breaking old ones. When the EVM executes an opcode, it would first check whether the bytecode should be executed under legacy rules or EOF rules, and if EOF rules, which version. By adding versioning, contracts can be executed under rules depending on when they were deployed. This allows a contract developer to be confident that the rules their contract is executed with don’t change. If the code is valid at deployment, it should be valid forever and executed the same way, even if the network’s execution rules change for new contracts.
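A versioning check of this kind can be sketched in a few lines of Python. EOF containers genuinely begin with the magic bytes 0xEF00 followed by a one-byte version (per EIP-3540); everything else below, including the function and ruleset names, is illustrative rather than taken from any client:

```python
EOF_MAGIC = b"\xef\x00"  # container prefix defined in EIP-3540

def select_ruleset(code: bytes) -> str:
    """Decide which execution rules apply to a deployed contract's code."""
    if code.startswith(EOF_MAGIC) and len(code) >= 3:
        return f"eof-v{code[2]}"  # versioned EOF rules, fixed at deployment
    return "legacy"               # pre-EOF contracts keep the original rules

print(select_ruleset(bytes([0xEF, 0x00, 0x01, 0x00])))  # eof-v1
print(select_ruleset(bytes([0x60, 0x01, 0x60, 0x02])))  # legacy
```

Because the version is baked into the deployed code itself, a contract can never silently migrate to a newer ruleset: new rules apply only to code that opts into them at deployment.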

There are two ways to analyze what code does: static and dynamic analysis. Static analysis is performed on code that is not currently running; dynamic analysis is the opposite. Dynamic tests are the kind most people associate with code testing because they involve coming up with test cases and executing the code directly to make sure it does what you want. Static analysis tools look at the raw bytecode and stitch together what it's supposed to do without ever running it, which enables massive gains in speed over dynamic analysis on larger codebases. There is significant demand from the Ethereum developer community for better static analysis tools for bytecode, serving everyone from MEV bots and security researchers to builders of highly optimized applications. One of the central goals of EOF is to aid in building static analysis tools by making bytecode easier to analyze.
Why is EVM bytecode hard to statically analyze?
EVM bytecode follows no structure. When you call a contract, the EVM part of the EL client loads the bytecode into memory and starts executing from the first instruction until it either reverts, runs out of gas, or returns successfully. This makes developers' jobs very hard: with no specific structure to the code, looking for bugs quickly becomes a search for a needle in a haystack.
EOF seeks to impose a structure on bytecode to simplify analysis and EVM maintenance.

Take, for example, the opcode 0x5b: JUMPDEST.
JUMPDEST marks a valid destination for the `JUMP` opcode. JUMP simply moves execution to a different part of the code: if the program counter, which tracks the current instruction, is at instruction 50 and the program needs to do something at instruction 150, JUMP continues execution from that location. However, the destination must contain a valid JUMPDEST opcode. This is a security feature introduced by Ethereum after genesis: by requiring a valid JUMPDEST at the destination, an attacker cannot hijack a program's execution. If an attacker found a bug that let them jump to any part of the code they wanted, they could do basically anything and begin executing code they should not be able to reach. For this reason, JUMPDEST opcodes must sit at specific locations, typically around various authorization checks.
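The runtime check described above amounts to a set-membership test. This Python sketch is illustrative (the function name and structure are not taken from any real client), but the opcode values for JUMP and JUMPDEST are the real ones:

```python
JUMP, JUMPDEST = 0x56, 0x5b  # real EVM opcode values

def checked_jump(dest: int, valid_dests: set) -> int:
    """Return the new program counter, or raise if the target is illegal."""
    if dest not in valid_dests:
        raise ValueError(f"invalid jump destination {dest}")
    return dest

valid = {150}                    # suppose analysis found a JUMPDEST at offset 150
print(checked_jump(150, valid))  # 150: execution may continue from here
# checked_jump(151, valid) would raise: an attacker cannot redirect
# execution to an arbitrary offset, only to explicitly marked destinations.
```

Restricting control flow to a known whitelist of destinations is what turns a would-be arbitrary-jump exploit into a harmless revert.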
When looking directly at bytecode, there are two ways this opcode may appear. The first is its intended purpose: indicating a jump destination. However, it is also possible to use that same byte as something else. In ASCII, 0x5b encodes the open bracket "[", and users may want to use that character in their code as a literal value. When statically analyzing code, it is quite difficult to discern whether a byte is being used as an opcode or as a piece of data. One of the mechanisms EOF uses to fix this is separating code into distinct sections, isolating raw data from executable opcodes. This removes any ambiguity in code analysis and makes it easier to deconstruct any raw data being used. Take the following examples:
0x…a5b1c4d908504a9c5ba4f8…
0x…a5b1c4d964504a9c5ba4f8… (1)
1: While the EVM would accept this as valid bytecode, since every opcode is technically valid, the code would not run without reverting; it was arbitrarily constructed for the purposes of this example.
Can you tell which contains the valid JUMPDEST? Both contain the hex value 0x5b, but in the second it is preceded by a 0x64 (PUSH5), meaning the following five bytes are data and the 0x5b should not be interpreted as a JUMPDEST. This quickly becomes very difficult to manage as bytecode size and complexity increase.
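The disambiguation rule at work here can be sketched in Python: a correct scanner must skip the immediate bytes of PUSH1–PUSH32 (opcodes 0x60–0x7f) so that a 0x5b inside push data is never mistaken for a JUMPDEST. This mirrors the logic legacy clients apply, though the code itself is a simplified illustration:

```python
JUMPDEST = 0x5b

def find_jumpdests(code: bytes) -> set:
    """Return offsets of genuine JUMPDEST opcodes, ignoring PUSH data."""
    dests, pc = set(), 0
    while pc < len(code):
        op = code[pc]
        if op == JUMPDEST:
            dests.add(pc)
        if 0x60 <= op <= 0x7f:         # PUSH1..PUSH32
            pc += 1 + (op - 0x5f)      # skip the opcode plus its immediates
        else:
            pc += 1
    return dests

# A real JUMPDEST at offset 0 vs. a 0x5b hidden inside PUSH5 data:
print(find_jumpdests(bytes.fromhex("5b6001")))        # {0}
print(find_jumpdests(bytes.fromhex("64504a9c5ba4")))  # set()
```

In the second call the scanner sees 0x64 (PUSH5), skips the five data bytes 50 4a 9c 5b a4 in one step, and correctly reports no jump destinations at all.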

What is JUMPDEST analysis?
When a contract is called, the EVM performs JUMPDEST analysis. It goes through the bytecode of the contract, looks for every usage of the JUMPDEST (0x5b) opcode, and records all of the valid JUMPDEST locations in a format that can be consulted later. When the EVM encounters a jump during execution, it checks that the destination is a valid JUMPDEST according to that earlier analysis. The issue is that this analysis has to be performed every time a contract is invoked. For a contract like USDC that may be called in every single block, that's a lot of computation being needlessly expended. It's also part of the reason why Ethereum contracts are limited in size.
Currently on mainnet, contracts are prohibited from being larger than ~24.5KB. The contract size limit was decided a long time ago as part of a security solution to a potential denial-of-service attack on nodes. Many developers want to get rid of it, believing that, as with gas limits, hardware has evolved to the point where nodes can handle an increase. Others oppose raising the limit because larger contracts take longer to analyze, increasing overhead for EL clients. One of the central goals of EOF is to change the rules so that JUMPDEST analysis is performed only once, at deployment, rather than every time a contract is loaded. This structure actually enables any kind of code validation at deployment time, beyond just JUMPDEST analysis. And since it reduces the overhead of executing contracts, it paves the way for future gas limit increases as well.


So that sounds reasonable, but why can't we just do that anyway, without EOF, if it's causing so many issues?
Since EOF comprises close to a dozen EIPs, the developers decided they should all be introduced at once, especially as many rely on one another to function. If this change to code validation is implemented, it can have significant positive impacts for developers, the most important of which is eliminating the contract size limit. Developers hate the limit because it prevents contracts from being large enough to do everything they want. When developers write contracts that exceed the size limit, they are often between a rock and a hard place: conventional wisdom says to slim down the size and complexity of the contract to fit the limit, but complexity often can't be reduced enough. This led to the creation of the Diamond-Proxy standard (ERC-2535). A diamond proxy is an architectural design in which a contract's functionality is split among several contracts, each serving a specific purpose and interacting with the others as parts of the same application, because combining it all into one contract would make it too large to deploy.
Diamond-Proxies can benefit contract upgradability, but at a cost in security and gas. Several contracts communicating with each other introduce new configuration complexity that is susceptible to security vulnerabilities, and having multiple contracts deployed and configured increases both deployment costs and runtime costs from a series of external calls. It is also argued that eliminating the contract size limit would enable developers to build more complex and feature-rich smart contracts that are currently not feasible. The smart contract development community is united in its dislike of the contract size limit and wants it removed, but specific plans remain stalled on the question of exactly how to do so safely. Some have argued that existing gas limits should prevent abuse of a no-limit system, but the merits of that argument are beyond the scope of this article. Various L1s and L2s, such as MegaETH, Monad, and Starknet, have recently been experimenting with removing the limit themselves.
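The routing idea behind a diamond proxy can be sketched as a simple lookup from function selector to facet address. This Python sketch is purely illustrative: the facet addresses are placeholders, and a real ERC-2535 diamond performs the lookup on-chain and `delegatecall`s the chosen facet. The selectors shown are the actual ERC-20 transfer/approve selectors:

```python
# Hypothetical selector-to-facet table; the addresses are placeholders.
facets = {
    "0xa9059cbb": "0xFacetA",  # transfer(address,uint256) lives in facet A
    "0x095ea7b3": "0xFacetB",  # approve(address,uint256) lives in facet B
}

def route(selector: str) -> str:
    """Return the facet a diamond would delegatecall for this selector."""
    if selector not in facets:
        raise ValueError(f"no facet registered for {selector}")
    return facets[selector]

print(route("0xa9059cbb"))  # 0xFacetA
```

Splitting logic across facets sidesteps the per-contract size limit, but every routed call is an extra external call: exactly the gas and configuration overhead described above.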
Anybody who has spent any amount of time writing Solidity code is already familiar with the "stack too deep" error. It occurs during compilation and is perhaps the single largest pain point of writing Solidity code to date. As Solidity has matured, developers have invented a variety of methods to work around the limitation, rendering the error more of an annoyance than an unsurpassable barrier to development, though the workarounds often result in code that is messier and less gas-efficient. One of the core EIPs in EOF replaces SWAP1-16 and DUP1-16 with SWAPN and DUPN. These new opcodes allow manipulation of a stack variable at any depth, eliminating the stack-too-deep error forever. Naturally, when this was announced, developers were very excited. However, client devs quickly pointed out a variety of potential issues with JUMPDEST analysis that come from allowing more dynamic opcodes.
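The difference between fixed-depth and depth-parameterized stack operations can be sketched in Python. The method names below are illustrative, not the real EOF encoding; the point is that SWAP1–SWAP16 can only reach 16 items below the top of the stack, which is where "stack too deep" comes from, while a depth-parameterized SWAPN has no such ceiling:

```python
class Stack:
    """Toy operand stack with depth-parameterized SWAPN/DUPN analogues."""
    def __init__(self):
        self.items = []

    def push(self, v):
        self.items.append(v)

    def swap_n(self, n):
        # SWAPN-style: exchange the top with the item n positions below it.
        self.items[-1], self.items[-1 - n] = self.items[-1 - n], self.items[-1]

    def dup_n(self, n):
        # DUPN-style: copy the n-th item from the top onto the top.
        self.items.append(self.items[-n])

s = Stack()
for i in range(20):       # 20 items: deeper than SWAP16 can reach
    s.push(i)
s.swap_n(17)              # fine here; no fixed-depth opcode could do this
print(s.items[-1])        # 2
```

With fixed SWAP1–SWAP16 opcodes, reaching item 17 would require shuffling the stack through temporary slots; a parameterized instruction reaches it directly.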
This has caused a significant debate between Solidity developers and the lower-level developers who implement the tools they rely on. Many have argued that instead of introducing new complexity at the EVM level, the issue should be tackled at the compiler level through optimizations. Both sides have valid points, and there is no clear consensus.
EOF is NOT a “number go up” upgrade like danksharding or the merge. EOF is strictly a technical upgrade for the developers that make Ethereum run and drive the platform forward. There is no financial case for how EOF affects tokenomics, liquidity, usability, etc. that the Ethereum community tends to associate with price. This should not be considered a negative, but it is difficult to elucidate a case for the short-term benefit to Ethereum’s price. However, if one is to take a longer-term view of Ethereum, then the value proposition becomes somewhat clearer.
EOF is important because it enhances developer tooling. Eliminating the stack-too-deep error, for example, is not a benefit end-users will notice, but it is one the overwhelming majority of developers desire. Implementing EOF improves the security of smart contracts and can help prevent hacks. Raising contract size limits can enable new applications currently constrained by size restrictions. Static relative jumps shorten development time and lower gas costs. Code validation enables future EVM upgrades in less time with better backwards compatibility. All of these things trickle down to end-users later, in the form of better applications, shorter development times, lower gas costs, and higher security. Similarly, despite the criticism of Ethereum's development timelines, EOF getting to this point should be seen as a positive for the long-term stability of the network, given the overwhelming complexity and difficulty of implementation.
The benefits of EOF are difficult to explain to the layman. The Merge, danksharding, and EIP-1559 all have easy-to-understand value propositions for users and provide good narratives for Ethereum; EOF does not offer one in the way account abstraction does. There are significantly harder challenges that still need to be solved, and the continued prioritization of EOF could potentially crowd out progress on them. It can also be argued that, depending on the issue, the work done on EOF may make their implementation easier by knocking down roadblocks that would otherwise only show up later in the process, similar to SELFDESTRUCT's deprecation. This cannot be fully known now, and it is equally possible that EOF could have the opposite effect, causing more issues down the road.
The debate over EOF is rare because the developer community does not appear to be aligned on whether it should go into effect. Twitter discourse as well as the Ethereum Magicians forums have hosted very heated debate on the subject, and it is not clear which side has more support. Client developers have valid concerns about the additional technical debt they may be incurring. There are also ideological debates about the simplicity of Ethereum as a base layer: many argue that the L1 should remain as simple as possible, and that the addition of excess rules and bytecode formats presents an affront to some of the ideological principles Ethereum was founded on. These arguments are typically coupled with debates about protocol ossification, for which Bitcoin's own community should be studied as a relevant analogue.
EOF does, however, potentially introduce new technical debt for client teams. Given that the EVM operates at the execution layer, keeping the codebase simple improves development speed and security while lowering computational overhead. This upgrade could have the opposite effect and create an astounding amount of new complexity in the codebase, since clients would have to maintain two versions of an already complex project. It's not difficult to foresee how additional rulesets could become an increasingly sprawling mess of branches in the code. Every new rule is one more piece of code that must be maintained in the future, and every one is a liability.
Many chains in the crypto industry have attempted to eschew the EVM in recent years. This is coupled with their insistence that a different architecture for decentralized computation can enhance scalability. These include Solana, Aptos, and Sui. These competitors do claim the ability to achieve higher levels of throughput under their alternative models. On the other side, some argue that given the EVM’s continued dominance and large lead on the competition, major structural changes are not needed to compete in the marketplace. Chains and L2s such as Fuel argue that rather than EOF, strategies such as parallel computation can result in the necessary efficiency gains to achieve sufficient scalability.
One of the largest concerns among Solidity developers is how EOF interacts with the rollup-centric roadmap. Not all L2s are equivalent to the L1. When new features are introduced on mainnet, there is often significant lag before those features are implemented on an L2; this is why you often hear about EVM compatibility versus EVM equivalence. Any change made on the L1 must eventually be implemented on all L2s.
As deployment tooling improves, more protocols have embraced deploying on as many chains as possible, because it's quite easy: for many chains it means simply changing an RPC URL and hitting deploy. But for chains that do not have exact feature parity, this can be a real issue. If the target chain does not support something, application developers may be forced to recompile and maintain multiple separate versions of their bytecode per chain. This is already a frequent pain point for multi-chain developers with existing functionality like CREATE2 and transient storage. Developers at rollups such as Optimism have expressed support for implementing EOF on OP if it is integrated on mainnet, but this is by no means a perfect solution and involves additional trust assumptions.
While scalability is not a primary goal of EOF, one additional benefit would be decreased proving time for generating zero-knowledge proofs over EVM execution. Since EOF decreases the number of possible execution paths and adds structure to bytecode, ZK-proof generation becomes significantly easier. As illustrated by the chart below, this results in smaller proof sizes and faster generation over arbitrary computation in the EVM. Currently, many bottlenecks still exist in the computationally expensive generation of ZK-proofs, which prevent their enshrinement into various parts of the Ethereum stack. Completing and deploying EOF thus stands to accelerate the development and implementation of zero-knowledge cryptography throughout Ethereum. This would have significant long-term benefits for the ecosystem, such as enhanced privacy-preserving technology, shorter block times, interoperability between L2s, and complete enshrinement of ZK-proofs into block verification. However, zero-knowledge cryptography remains in its infancy, and it is possible that such improvements would occur anyway thanks to rapid general progress in the engineering of ZK-proof generation.

Not all objections from core devs are to EOF's existence; some concern the process of implementation. Fusaka already plans to ship with a base implementation of data-availability sampling, which will increase network scalability by several orders of magnitude, and many developers argue that Fusaka's scope should not be expanded further. Proceeding with EOF also potentially crowds out resources for the remaining work on larger-impact upgrades like The Verge and The Purge. One option is to have L2s implement EOF first to gauge the potential impact and perform more analysis. Since many L2 sequencers use modified versions of Geth, or other clients that have already implemented EOF, this is a viable option. It would also be a less damaging choice, as years of work have already gone into getting EOF to its current status, and implementation on L2s would prevent that work from being wasted. Few argue for EOF to be completely scrapped, but without a clear path to inclusion, its future would become uncertain.
This entire debate, while simmering below the surface, only really exploded recently when Solidity 0.8.29 was announced with preliminary support for EOF, and the Solidity compiler team has recently announced their support for EOF as well. As of writing, Geth, which makes up ~43% of EL nodes, has come out against EOF in Fusaka, while the second largest client, Nethermind (36%), has come out in favor. Notable researchers Dankrad Feist (2) and pcaversaccio argue adamantly against EOF, with Tim Beiko and Vitalik in favor. This has also ignited further debate about the decision-making authority of various client developers and teams, specifically around potential veto powers for EL upgrades. As of March 25, 2025 the team behind Besu Client and
2: The researcher behind the appropriately named "Danksharding" scaling protocol for Ethereum.
There is also a series of potential options for limiting the scope of EOF to remove some of the more contentious issues. As it stands, one of the most contentious issues is the significant number of breaking changes to application development on Ethereum and how developers must adapt. Much of the debate revolves around how best to manage this and minimize roadblocks for developers, including allowing both legacy and EOF contracts to be deployed and leaving the choice of which version to use to the developer. The core developers have also agreed that EOF should be excluded only if including it would delay the shipping of PeerDAS.
The debate over EOF is ongoing, and new information is being released constantly. However, this spirited debate should be seen as a positive for the community. Despite the heated conversation, many prominent researchers and developers have entered the debate because they want to ensure the best possible future for Ethereum's technological superiority. EOF is not being built in the shadows; the debate is playing out in public among some of Ethereum's best and brightest. There has been no deference to authority, and many of the community's most venerated developers are on opposite sides of the issue. What ultimately happens next is unknown, but there should be confidence that the outcome will be the best course of action, and not the result of an unelected and unaccountable shadow group of Ethereum's leaders.
While EOF remains controversial among developers, it represents a significant milestone in the maturity of Ethereum. The initial design of the Ethereum Virtual Machine was intentionally simple because the scope of what was possible with smart contracts was limited. As the crypto development ecosystem evolves, EOF is another indicator of how Ethereum must grow with it. EOF has the potential to improve the development experience and position Ethereum to lead the pack in the next generation of smart contract development, or it could have little effect at all. The effects of EOF may not be felt right away, and only time will tell whether the impact is worthwhile. Every item on the Ethereum roadmap, past and future, is meant to improve Ethereum in some way, and EOF is no different, this time targeting shortcomings of the execution layer just as The Merge targeted the consensus layer.
Controversy comes with the territory — even The Merge drew opposition. However, the mere act of shipping such a complicated upgrade, championed by some of Ethereum’s most dedicated, highlights its continuing relevance in the increasingly-competitive L1 market. Regardless of the impact, the perseverance and dedication of the development community in iterating and prioritizing improvement of the technology over short-term profiteering should be a sign that Ethereum is on the right track and poised to succeed well into the future.
This article attempts to be as accurate as possible, but due to the highly fluid and constantly evolving nature of EOF some information may be outdated or incorrect. If you find any errors or have feedback, please reach out to the author directly.
EOF(ork) in the Road: The Controversy around EVM Object Format (EOF) is written by Josh Weintraub (@0xTraub). Special thanks to Rika Goldberg (@RikaGoldberg), Kaleb Rasmussen (@kaleb0x), and Cole Schendl (@404_cole) for additional contributions and feedback.
_____________________
EOF is highly controversial among client teams and Solidity developers with no clear consensus.
Implementing EOF has significant benefits for code security, backwards compatibility, and gas efficiency.
It accomplishes this at the cost of adding complexity and potentially creating technical debt without clear short-term benefits to end-users.
EOF may be a proxy for a larger debate within the Ethereum community about protocol ossification and prioritizing price/value-extraction versus technical competition with alternative L1s blockchains.
The Ethereum Virtual Machine (EVM) is the backbone of Ethereum that enables smart contracts to exist. It is the part of the code that a node runs, which actually loads and executes smart contract code during a transaction. Without the EVM, all you would be able to do is send Ether.
A virtual machine is very common in the Web2 World. When a computer simulates another computer, the computer being run within the other is a virtual machine. The benefit of using virtual machines is that they can be used to replicate computation on different devices. Multiple machines, each with different operating systems and geographically distributed, can perform the same computation.
Opcodes are instructions a computer uses to perform operations. Each opcode performs a specific operation. To add two numbers, use the ADD opcode. When one contract wants to call another, it uses the CALL opcode. EVM bytecode is a series of opcodes executed sequentially and produces an output. Two different machines, with access to the same bytecode and the same start state, will produce the same output. This is how Ethereum maintains consensus. Verifying a transaction means running it through the EVM and ensuring that the execution does not revert. It was the first mechanism for enabling smart contracts on a blockchain, and today remains an industry standard chosen for many new chains and L2’s.
EVM Object Format (EOF) is an attempt by the Execution Layer(EL) teams to revamp how smart contract bytecode operates and the rules of the EVM. It is part of what Vitalik nicknamed The Splurge on the Ethereum roadmap. It is a combination of nearly a dozen different EIPs all targeting the execution-layer. EOF is several years into development and slated to be included in the next hard-fork (Fusaka) along with a basic version of data-availability sampling. However, the developer community has begun to push back hard against potentially implementing EOF, with many calling for its delay and exclusion from Fusaka. This article has been abbreviated significantly to focus on a few of the controversies surrounding EOF. Versioning, Jumpdest analysis, and SWAPN were chosen as uniquely salient and digestible, but there are many other aspects of EOF, including:
All of which are highly complex with reasonable arguments on both sides. This article should not be considered a complete and authoritative analysis of EVM object format, but rather to highlight the complicated and heated debate around the subject currently occurring.

For the most part, the EVM has not been significantly altered since its genesis. A contract deployed in the genesis block is executed using the same rules a contract deployed yesterday would. This means that any changes to the EVM must be compatible with 100% of existing contracts. Every time a rule is changed, it threatens to break a contract’s functionality in some way. Given the remaining items on the Ethereum roadmap, this can cause unexpected difficulties in designing alterations to Ethereum.
A few years ago, there was a proposal to deprecate the `SELFDESTRUCT` opcode, which allowed a smart contract to delete itself from the blockchain, as a precursor to the implementation of The Verge. As it turned out, there were several dapps on mainnet that made use of SELFDESTRUCT as part of their core functionality. Deprecating the opcode would mean immediately rendering all of those contracts unusable. These applications were not the largest in TVL, but they were still actively being worked on by the protocol team and had user deposits that could not quickly be withdrawn. This created an ideological split in the Ethereum developer community. Many argue that supporting Verkle Trees should be prioritized over supporting small applications with limited TVL/PMF. However, to allow the community to brick a contract unexpectedly and without recourse would go against the core principles of Ethereum and present a series of existential questions about its core values. In the end, a compromise was found between protocol and core developers, but this debate only occurred because the EVM doesn’t have a versioning scheme.
A versioning scheme would allow EVM maintainers to write new rules for how the EVM should operate without worrying about breaking old ones. When the EVM executes an opcode, it would first check whether the bytecode should run under legacy rules or EOF rules, and if EOF rules, which version. With versioning, contracts are executed under the rules in force when they were deployed, so a contract developer can be confident that the rules their contract runs under will not change. If the code is valid at deployment, it should be valid forever and executed the same way, even if the network's execution rules change for new contracts.
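The dispatch this implies can be sketched in a few lines of Python. EOF containers do begin with the magic bytes 0xEF00 followed by a one-byte version (per EIP-3540), but the helper below is an illustrative model of versioned rule selection, not client code.

```python
# Illustrative only: pick a rule set from the code's version prefix
# instead of applying one global rule set to every contract.
EOF_MAGIC = b"\xef\x00"  # EIP-3540 container prefix

def execution_rules(code: bytes) -> str:
    """Return which rule set the given bytecode should run under."""
    if len(code) >= 3 and code[:2] == EOF_MAGIC:
        return f"eof-v{code[2]}"   # version byte follows the magic
    return "legacy"                # pre-EOF bytecode keeps old semantics

print(execution_rules(b"\xef\x00\x01\x00"))          # eof-v1
print(execution_rules(bytes.fromhex("6001600101")))  # legacy
```

A contract deployed before the fork keeps its original semantics forever, while new EOF deployments opt in to the new rules via the prefix.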

There are two ways to analyze what code does: static and dynamic analysis. Static analysis is performed on code that is not currently running; dynamic analysis is the opposite. Dynamic tests are the kind most people associate with code testing because they involve writing test cases and executing the code directly to make sure it does what you want. Static analysis tools look at the raw bytecode and stitch together what it is supposed to do without ever running it, which enables massive gains in speed over dynamic analysis on larger codebases. There is significant demand in the Ethereum developer community for better bytecode static-analysis tools, for uses ranging from security research and MEV bots to highly optimized applications. One of the central goals of EOF is to aid in building static analysis tools by making bytecode easier to analyze.
Why is EVM bytecode hard to statically analyze?
EVM bytecode follows no structure. When you call a contract, the EVM part of the EL client loads the bytecode into memory and starts executing from the first instruction until it either reverts, runs out of gas, or returns successfully. This makes developers' jobs very hard: with no specific structure to the code, looking for bugs quickly becomes a search for a needle in a haystack. Several such challenges are discussed below.
EOF seeks to impose a structure on bytecode to simplify analysis and EVM maintenance.

Take, for example, the opcode 0x5b, JUMPDEST.
JUMPDEST marks a valid destination for the `JUMP` opcode. JUMP simply moves execution to a different part of the code: if the program counter, which tracks where you are, is on instruction 50 and the program needs to do something at instruction 150, JUMP is used to continue executing from that point. However, the location you jump to must contain a valid JUMPDEST opcode. This is a security feature: by ensuring there is a valid JUMPDEST at the destination, an attacker cannot hijack the execution of a program. If an attacker found a bug that let them jump to any part of the code they wanted, they could do essentially anything and begin executing parts of the code they should not be able to reach. JUMPDEST opcodes must therefore sit at specific locations, typically around various authorization checks.
When looking directly at bytecode, this opcode may appear in two ways. The first is its intended purpose: to indicate a jump destination. However, it is also possible for the same byte to be used as data. In human-readable ASCII, 0x5b is the open bracket "[", and users may want to use that character in their code as a literal value. When statically analyzing code, it is quite difficult to discern whether a byte is being used as an opcode or as a piece of data. One of the mechanisms EOF uses to fix this is separating code into distinct sections, isolating raw data from executable opcodes. This removes ambiguity from code analysis and makes it easier to deconstruct any raw data being used. Take the following examples:
0x…a5b1c4d908504a9c5ba4f8…
0x…a5b1c4d964504a9c5ba4f8… (1)
1: While the EVM would accept this as valid bytecode, since every opcode is technically valid, it would not run without reverting; it has been arbitrarily constructed for the purposes of this example.
Can you tell which contains the valid JUMPDEST? Both include the byte 0x5b, but in the second it falls within the immediate data that follows a 0x64 (PUSH5), so it should not be interpreted as a JUMPDEST. This quickly becomes very difficult to manage as bytecode size and complexity increase.
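This disambiguation is exactly what the EVM's JUMPDEST analysis performs: a single pass over the bytecode that skips the immediate data carried by each PUSH opcode. A minimal Python sketch, run against the two hex fragments above:

```python
def analyze_jumpdests(code: bytes) -> set[int]:
    """One pass over the bytecode: record every 0x5b that sits in an
    instruction position, skipping the 1-32 bytes of immediate data
    that follow each PUSH opcode (0x60 = PUSH1 ... 0x7f = PUSH32)."""
    valid, pc = set(), 0
    while pc < len(code):
        op = code[pc]
        if op == 0x5B:              # JUMPDEST in instruction position
            valid.add(pc)
        if 0x60 <= op <= 0x7F:      # PUSHn carries n immediate bytes
            pc += op - 0x5F         # jump over the immediate data
        pc += 1
    return valid

print(analyze_jumpdests(bytes.fromhex("a5b1c4d908504a9c5ba4f8")))  # {8}
print(analyze_jumpdests(bytes.fromhex("a5b1c4d964504a9c5ba4f8")))  # set()
```

In the first fragment the 0x5b at offset 8 is a real instruction; in the second it is swallowed by the PUSH5's five data bytes, so the analysis correctly finds no jump destination.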

What is JUMPDEST analysis?
When a contract is called, the EVM performs JUMPDEST analysis. It walks the bytecode of the contract, looks for every occurrence of the JUMPDEST (0x5b) opcode in an instruction position, and compiles all the valid JUMPDEST locations into a structure it can query later. When the EVM encounters a jump during execution, it checks that the destination is a valid JUMPDEST from that earlier analysis. The issue is that this analysis has to be performed every time a contract is invoked. For a contract like USDC that may be called in every single block, that is a lot of computation being needlessly expended. It is also part of the reason why Ethereum contracts are limited in size.
Currently on mainnet, contracts are prohibited from being larger than 24,576 bytes (~24.5 KB). The contract size limit was decided a long time ago as part of a security solution to a potential denial-of-service attack on nodes. Many developers want to get rid of it, believing that, similar to gas limits, hardware has evolved to the point where nodes can handle an increase. Others oppose raising the limit because larger contracts take longer to analyze, increasing overhead for EL clients. One of the central goals of EOF is to change the rules so that JUMPDEST analysis is performed only once, at deployment, instead of every time a contract is loaded. This structure actually enables any kind of code validation, beyond just JUMPDEST analysis, at deployment time. Since this reduces the overhead of executing contracts, it also paves the way for future gas limit increases.


So that sounds reasonable, but why can't we just do that anyway, without EOF, if it's causing so many issues?
Since EOF comprises close to a dozen EIPs, the developers decided they should all be introduced at once, especially as many rely on one another to function. If this change to code validation is implemented, it can have significant positive impacts for developers, the most important of which is eliminating the contract size limit. Developers hate the limit because it prevents contracts from being large enough to do everything they want. Most often, when developers write contracts that exceed the size limit, they are between a rock and a hard place. Conventional wisdom says to slim down the size and complexity of the contract to fit the limit, but often complexity can't be reduced enough. This led to the creation of the Diamond-Proxy standard (ERC-2535), an architectural design in which the functionality of a contract is split among several contracts, each serving a specific purpose and interacting with the others as part of the same application, because combining it all into one file would make it too large to be deployed.
Diamond-Proxies can benefit contract upgradability, but at a cost in security and gas. Several contracts communicating with each other introduce new configuration complexity that is susceptible to security vulnerabilities, and having multiple contracts deployed and configured increases both deployment costs and runtime costs from the series of external calls. It is also argued that eliminating the contract size limit would enable developers to build more complex, feature-rich smart contracts that are currently not feasible. The smart contract development community is united in its dislike of the contract size limit and wants it removed, but specific plans for its removal remain stalled over exactly how to do so safely. Some have argued that the existing gas limits should prevent abuse of a no-limit system, but the merits of that argument are beyond the scope of this article. Various L1s and L2s, such as MegaETH, Monad, and Starknet, have recently been experimenting with removing the limit themselves.
Anybody who has spent any amount of time writing Solidity code is already familiar with the "stack too deep" error. It occurs during compilation and is perhaps the single largest pain point of writing Solidity code to date. As Solidity has matured, developers have invented a variety of methods to work around the limitation, rendering the error more of an annoyance than an insurmountable barrier to development, though the workarounds often result in code that is messier and less gas-efficient. One of the core EIPs in EOF introduces SWAPN and DUPN alongside SWAP1-16 and DUP1-16. These new opcodes allow manipulating a stack variable at any depth, eliminating the stack-too-deep error forever. Naturally, when this was announced, developers were very excited. However, client devs quickly pointed out a variety of potential issues with JUMPDEST analysis that come from allowing more dynamic opcodes.
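The depth limit is easy to see in a toy model. Legacy SWAP1 through SWAP16 encode the depth in the opcode byte itself, so only the top seventeen stack slots are reachable; SWAPN (EIP-663) takes the depth from an immediate operand instead. The exact operand encoding in EIP-663 differs slightly from this sketch, which only illustrates the cap going away:

```python
# Toy model of the EVM stack-depth limit (not client code).

def swap(stack: list, n: int) -> None:
    """Legacy SWAPn: exchange the top item with the item n slots below.
    The depth is baked into the opcode, so n is capped at 16."""
    if not 1 <= n <= 16:
        raise ValueError("legacy SWAP only reaches 16 slots deep")
    stack[-1], stack[-1 - n] = stack[-1 - n], stack[-1]

def swapn(stack: list, n: int) -> None:
    """SWAPN-style: the depth comes from an operand, so any depth the
    stack actually has is reachable."""
    stack[-1], stack[-1 - n] = stack[-1 - n], stack[-1]

s = list(range(20))   # 19 is on top of the stack
swapn(s, 18)          # reaches 18 slots below the top
print(s[-1])          # 1
```

With only sixteen reachable slots, a compiler juggling more than that many live variables has nowhere to put them, which is precisely where "stack too deep" comes from.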
This has caused a significant debate between Solidity developers and the lower-level developers who implement the tools they rely on. Many have argued that instead of introducing new complexity at the EVM level, the issue should be tackled at the compiler level through optimizations. Both sides have valid points, and there is no clear consensus.
EOF is NOT a "number go up" upgrade like danksharding or The Merge. EOF is strictly a technical upgrade for the developers who make Ethereum run and drive the platform forward. There is no financial case for how EOF affects the tokenomics, liquidity, or usability that the Ethereum community tends to associate with price. This should not be considered a negative, but it is difficult to make a case for a short-term benefit to Ethereum's price. If one takes a longer-term view of Ethereum, however, the value proposition becomes somewhat clearer.
EOF is important because it enhances developer tooling. Eliminating the stack-too-deep error is not a benefit that end-users will notice, but it is one the overwhelming majority of developers desire. Implementing EOF improves the security of smart contracts and can help prevent hacks. Raising contract size limits can enable new applications currently constrained by size restrictions. Static relative jumps shorten development time and lower gas costs. Code validation enables future EVM upgrades in less time with better backwards compatibility. All of these trickle down to end-users later, in the form of better applications, shorter development times, lower gas costs, and higher security. Similarly, despite the criticism of Ethereum's development timelines, getting EOF to this point should be seen as a positive for the long-term stability of the network, given the overwhelming complexity and difficulty of the implementation.
The benefits of EOF are difficult to explain to the layman. The Merge, danksharding, and EIP-1559 all have easy-to-understand value propositions for users and provide good narratives for Ethereum; EOF does not have one in the way account abstraction does. There are significantly harder challenges that still need to be solved, and the continued prioritization of EOF could potentially crowd out progress on them. It can also be argued that, depending on the issue, the work done on EOF may make their implementation easier by knocking down roadblocks that would otherwise only show up later in the process, similar to SELFDESTRUCT's deprecation. This cannot be fully known now, and it is equally possible that EOF could have the opposite effect, causing more issues down the road.
The debate over EOF is rare in that the developer community is not aligned on whether it should go into effect. Discourse on Twitter and the Ethereum Magicians forums has been very heated, and it is not clear which side has more support. Client developers have valid concerns about the additional technical debt they may be incurring. There are also ideological debates about the simplicity of Ethereum as a base layer: many argue that the L1 should remain as simple as possible, and that the addition of extra rules and bytecode formats is an affront to some of the ideological principles Ethereum was founded on. These arguments are typically coupled with debates about protocol ossification, for which Bitcoin's community serves as a relevant analogue.
EOF does, however, potentially introduce new technical debt for client teams. Given that the EVM operates at the execution layer, keeping the codebase simple can enhance development time and security while lowering computational overhead. This upgrade could have the opposite effect and create an astounding amount of new complexity in the codebase. This comes from having to maintain two versions of an already complex project. It’s not difficult to foresee how additional rulesets can become an increasingly sprawling mess of branches in the code. Every new rule becomes one more piece of code that must be maintained in the future and is a liability.
Many chains in the crypto industry have attempted to eschew the EVM in recent years. This is coupled with their insistence that a different architecture for decentralized computation can enhance scalability. These include Solana, Aptos, and Sui. These competitors do claim the ability to achieve higher levels of throughput under their alternative models. On the other side, some argue that given the EVM’s continued dominance and large lead on the competition, major structural changes are not needed to compete in the marketplace. Chains and L2s such as Fuel argue that rather than EOF, strategies such as parallel computation can result in the necessary efficiency gains to achieve sufficient scalability.
One of the largest concerns among Solidity developers is how EOF interacts with the rollup-centric roadmap. Not all L2s are equivalent to the L1. When new features are introduced on mainnet, there is often significant lag before those features are implemented on an L2, which is why you often hear about EVM compatibility versus EVM equivalence. Every change made on the L1 must eventually be implemented on all layer-twos.
As deployment tools improve, more protocols have embraced deploying on as many chains as possible, since it is quite easy: for many chains it means simply changing an RPC URL and hitting deploy. But for chains that lack exact feature parity, this can be a real issue. If the target chain does not support something, application developers may be forced to recompile and maintain multiple separate versions of their bytecode per chain. This is already a frequent pain point for multi-chain developers with existing functionality like CREATE2 and transient storage. Developers at rollups such as Optimism have expressed support for implementing EOF on OP if it is integrated on mainnet, but this is by no means a perfect solution and involves additional trust assumptions.
While scalability is not a primary goal of EOF, one additional benefit would be a decrease in proving time for the generation of zero-knowledge proofs over EVM execution. Since EOF decreases the number of possible execution paths and adds structure to bytecode, ZK-proof generation can improve significantly. As illustrated by the chart below, this results in smaller proof sizes and faster generation over arbitrary computation in the EVM. Many bottlenecks still exist in the computationally expensive generation of ZK-proofs, which prevent their enshrinement into various parts of the Ethereum stack. Completing and deploying EOF thus stands to accelerate the development and implementation of zero-knowledge cryptography throughout Ethereum. This would have significant long-term benefits for the ecosystem, such as enhancing privacy-preserving technology, shortening block times, improving interoperability between L2s, and enabling the full enshrinement of ZK-proofs into block verification. However, zero-knowledge cryptography remains in its infancy, and it is possible that such improvements would occur anyway through increasingly rapid advances in the general engineering of ZK-proof generation.

Not all objections from core devs are to EOF's existence; some concern the process of implementation. Fusaka already plans to ship with a base implementation of data-availability sampling, which will increase network scalability by several orders of magnitude, and some developers argue that Fusaka's scope should not be expanded further. Furthermore, proceeding with EOF potentially crowds out resources for the remaining work on larger-impact upgrades like The Verge and The Purge. One option is to have L2s implement EOF first, to gauge the potential impact and perform more analysis. Since many L2 sequencers use modified versions of Geth, or other clients that have already implemented EOF, this is a viable option. It would also be a less damaging choice, as years of work have already gone into getting EOF to its current status, and implementation on L2s would prevent that work from being wasted. Few are arguing for EOF to be completely scrapped, but without a clear path to inclusion, its future would become uncertain.
This entire debate, while simmering below the surface, only really exploded recently when Solidity 0.8.29 was announced, focusing on preliminary support for EOF. The Solidity compiler team has recently announced its support for EOF as well. As of writing, Geth, which makes up ~43% of EL nodes, has come out against EOF in Fusaka, while the second largest client, Nethermind (36%), has come out in favor. Notable researchers Dankrad Feist (2) and pcaversaccio argue adamantly against EOF, with Tim Beiko and Vitalik in favor. This has also ignited further debate about the decision-making authority of various client developers and teams, specifically around potential veto powers for EL upgrades. As of March 25, 2025 the team behind Besu Client and
2: The researcher behind the appropriately named "Danksharding" scaling protocol for Ethereum
There is also a series of potential options for narrowing EOF's scope to remove some of the more contentious issues. As it stands, one of the most contentious concerns is the significant number of breaking changes EOF brings to application development on Ethereum, and how developers must adapt. Much of the debate revolves around how best to manage this and minimize roadblocks for developers, including allowing both legacy and EOF contracts to be deployed and leaving the choice of version to the developer. The core developers have also agreed that EOF should be excluded only if including it would delay the shipping of PeerDAS.
The debate over EOF is ongoing, and new information is being released constantly. However, this spirited debate should be seen as a positive for the community. Despite the heated conversation, many prominent researchers and developers have entered it because they want to ensure the best possible future for Ethereum's technological superiority. EOF is not being built in the shadows; the debate is playing out in public among some of Ethereum's best and brightest. There has been no deference to authority, and many of the community's most venerated developers are on opposite sides of the issue. What happens next is unknown, but there should be confidence that the outcome will be the best course of action, not the result of an unelected and unaccountable shadow group of Ethereum's leaders.
While EOF remains controversial among developers, it represents a significant milestone in the maturity of Ethereum. The initial design of the Ethereum Virtual Machine was intentionally simple because the scope of what was possible with smart contracts was limited. As the crypto development ecosystem evolves, EOF is another indicator of how Ethereum must grow with it. EOF has the potential to improve the development experience and position Ethereum to lead the pack in the next generation of smart contract development, or it could have little effect at all. The effects may not be felt right away, and only time will tell whether the impact is worthwhile. Every item on the Ethereum roadmap, past and future, was meant to improve Ethereum in some way, and EOF is no different, this time targeting shortcomings of the execution layer, just as The Merge targeted the consensus layer.
Controversy comes with the territory; even The Merge drew opposition. However, the mere act of shipping such a complicated upgrade, championed by some of Ethereum's most dedicated, highlights its continuing relevance in the increasingly competitive L1 market. Regardless of the impact, the perseverance and dedication of the development community in iterating and prioritizing improvement of the technology over short-term profiteering should be a sign that Ethereum is on the right track and poised to succeed well into the future.
This article attempts to be as accurate as possible, but due to the highly fluid and constantly evolving nature of EOF some information may be outdated or incorrect. If you find any errors or have feedback, please reach out to the author directly.