
OP City: Preview
In recent years, the Ethereum ecosystem has faced significant scalability challenges, driving the need for innovative solutions that can optimize operation costs while maintaining the integrity and decentralization of the network. Among these solutions, the OP Stack and Cannon Fault Proofs Virtual Machine (VM) have become critical components in the ongoing efforts to enhance the performance and efficiency of Ethereum Layer 2 rollups. In this context, the OPcity stack delves into the theoretica...
R&D&I laboratory for urban public goods


Ethereum has long envisioned itself as the World Computer—a shared global platform for registering and verifying the state of public goods, collective decisions, and decentralized markets. By embedding trust directly into code and consensus, Ethereum enables a new kind of programmable legitimacy. Yet for these values to reach governments, organizations, and everyday people, the technology must scale.
Layer 2 (L2) rollups have emerged as a practical solution to Ethereum’s scalability challenges, addressing issues such as cost, capacity, and transaction throughput. Among these, optimistic rollups allow transactions to be executed off-chain while posting minimal data on-chain, thereby preserving decentralization and network security.
The OP Stack, developed by OP Labs and coordinated by the Optimism Foundation, is one of the most widely adopted open-source frameworks for deploying dedicated L2 rollups. Its minimal, modular architecture allows chains to post periodic state roots to Ethereum (or alternative L1s), relying on optimistic validation. These state roots can be challenged within a seven-day window using Fault Proof mechanisms. A robust and permissionless implementation of these mechanisms is essential to achieving Stage 1 requirements in L2BEAT’s state validation maturity framework.
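As a toy illustration of this optimistic timeline, the sketch below (plain Python, not OP Stack code) treats a state root as final only once its challenge window has passed without an open challenge; the seven-day constant mirrors the window described above.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch, not an OP Stack API: an optimistically posted state
# root is considered final only after its challenge window elapses
# unchallenged.
CHALLENGE_WINDOW = timedelta(days=7)

def is_finalized(posted_at: datetime, now: datetime, challenged: bool) -> bool:
    """A state root finalizes once the 7-day window passes with no open challenge."""
    return not challenged and (now - posted_at) >= CHALLENGE_WINDOW

posted = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert not is_finalized(posted, posted + timedelta(days=3), challenged=False)
assert is_finalized(posted, posted + timedelta(days=8), challenged=False)
assert not is_finalized(posted, posted + timedelta(days=8), challenged=True)
```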
This research report examines the evolution and implementation of fault proof systems in the OP Stack. It builds on our earlier work, OP City: Research and Optimization of OP Stack Deployments and Canon Fault Proofs VM, and presents findings across three milestones: (1) a comprehensive review of OP Stack upgrades and their impact on fault proofs; (2) a technical analysis of emerging mechanisms like Cannon FPVM, opML, oppAI, and DAVE; and (3) a comparative benchmark to assess readiness, performance, and integration paths. Together, these contributions aim to advance the decentralization and resilience of the OP Stack—and, by extension, Ethereum as a truly global computational engine.
This milestone also included a version benchmark of the OP Stack, conducted through a dual-node deployment to evaluate the performance of rollups built from different configurations and protocol versions. The goal was to compare their behavior under equivalent conditions and identify meaningful differences in efficiency and execution. The results of this test were documented in our previous Mirror article, and the full research process, including setup and data, is available in our GitHub repository.
Before the OP Stack was formalized as a unified, modular framework, its architecture evolved through iterative stages that laid the groundwork for scalable Layer 2 execution. The journey began with the Unipig Demo in October 2019, a proof-of-concept developed in collaboration with Uniswap that showcased the power of Optimistic Rollups. It demonstrated a ~10× increase in throughput and significant reductions in gas fees compared to Ethereum mainnet, proving the concept’s viability.
This was followed by the launch of the SNX testnet in September 2020, which introduced general-purpose smart contract execution on Layer 2. Transactions ran at approximately ~10% of L1’s cost, with improved latency and faster confirmation times.
In January 2021, the network transitioned into its first mainnet phase—often referred to as the OVM Era—with a version of the Optimistic Virtual Machine that relied on a custom Solidity transpiler. While functional, this setup introduced significant inefficiencies: over 25,000 lines of bespoke code, complex tooling, and inflated state transition costs.
By October 2021, Optimism rolled out the EVM Equivalence Upgrade, a major step forward that removed the transpiler, drastically simplified the execution layer, and brought the network into full compatibility with Ethereum tooling. Throughput increased to ~100 TPS, and developer experience improved significantly. Just two months later, in December 2021, the launch of OP Mainnet enabled public contract deployment, catalyzing adoption by removing whitelists. This opening of access marked a turning point in Optimism’s growth.
The culmination of these early experiments arrived with the Bedrock Upgrade in June 2023, which replaced legacy OVM components with a leaner, modular, and Ethereum-equivalent design. Bedrock slashed code complexity by ~90%, cut transaction costs by ~30%, and boosted throughput to ~450 TPS—ushering in the era of the OP Stack and laying the foundation for a Superchain of standardized, interoperable rollups.

All referenced documents are available in our open repository for transparency and further review: 🔗 OP Stack Protocol Upgrades Review – Zenbit GitHub
Since the formalization of the OP Stack, the protocol has undergone 15 official upgrades, ranging from foundational architectural transitions to fine-grained feature releases across the OP Stack’s modular layers. Our review spanned 62 curated sources, including governance forum proposals, developer documentation, audit reports, and OP Stack specification entries. These materials were analyzed to trace the evolution of the OP Stack’s dispute system—from the launch of permissionless Cannon-based fault proofs in Protocol Upgrade #7, to the infrastructure prerequisites for multi-threaded MIPS64 fault games in Upgrades #14 and #15. Sources such as governance threads, public audit reports, and design documents not only clarified technical intent, but also allowed us to assess how each upgrade contributed to meeting Stage 1 decentralization criteria from the L2BEAT framework.
Through this analysis, we identified 13 upgrades with direct impact on the settlement layer, including 8 upgrades that significantly advanced the fault proof system—transforming it from a trusted fallback into a fully modular, permissionless verification layer.
Superchain Config laid the groundwork for coordinated multi-chain security by introducing a global SuperchainConfig contract on L1. This enabled the Security Council to pause cross-chain operations via an expanded emergency mechanism, extending control beyond withdrawals to include message relays and token bridges. It also included critical fixes—such as closing reentrancy attack vectors—and enhanced L2 token compatibility through deterministic CREATE2 deployments. While not altering fault proof logic directly, this upgrade provided a crucial fallback security layer for fault proof systems, strengthening the OP Stack’s operational resilience.
The state validation improvements began with Fault Proofs, which marked a foundational shift in the OP Stack’s security model by replacing the previously trusted state root proposer mechanism with a permissionless fault proof system. This upgrade introduced a modular, on-chain binary bisection game powered by the Cannon VM, allowing any participant to propose or challenge state roots. Supporting this dispute flow, key components such as the DisputeGameFactory, AnchorStateRegistry, and bonding mechanisms were deployed. Together, these features enabled the OP Stack to meet the Stage 1 decentralization milestone set by L2BEAT, especially through the integration of a Security Council override to pause withdrawals in emergencies.
Guardian introduced formalized guardian governance. This upgrade extended the Security Council’s role by granting them authority over L2 ProxyAdmin control and emergency withdrawal pausing, providing a critical safeguard during fault proof escalation periods. It was a vital step in building the governance infrastructure required for fault proof decentralization, ensuring that challenge games could proceed trustlessly without introducing systemic risk.
Granite focused on enhancing the protocol infrastructure to support more advanced proof systems. Updates to SystemConfig and dispute game interfaces increased flexibility for challenger clients and made space for introducing alternative VMs, such as MIPS64. These changes future-proofed the OP Stack’s dispute layer, ensuring compatibility with evolving fault proof implementations such as MT-Cannon and potential zk variants.
As fault proofs moved into production, Holocene standardized the format for L2-to-L1 outputs, refining how withdrawals and challenge games would be validated across chains. These improvements enhanced compatibility with new proof types and helped streamline the verification logic for post-upgrade withdrawals, an essential update as the system transitioned away from trusted third parties.
In preparation for Ethereum’s Pectra hard fork, Pre-Pectra Readiness introduced important features to support hybrid and forward-compatible proofs. This included access to the L1 Beacon Root, enabling withdrawal verification against Ethereum’s consensus layer, and added precompiles such as BLS operations and timestamp access that are critical for next-generation dispute VMs. These changes ensured that fault proofs could continue to operate in a post-Dencun Ethereum environment and opened the door to zk-fault proof hybrids.
This set the stage for Pre-Isthmus, which established the L1 infrastructure for MT-Cannon, a multithreaded, 64-bit evolution of the Cannon VM. Without requiring a hard fork, this upgrade deployed new contracts such as OPChainManagerV2 and MIPS64.sol, and restructured the fault game logic to support more scalable dispute execution. MT-Cannon could now be tested in production environments in parallel with legacy Cannon, serving as a trial run for the more performant and deterministic VM architecture.
Finally, Isthmus completed the transition by activating MT-Cannon as the canonical fault proof VM. This hard fork integrated Pectra-compatible features, such as support for new precompiles and the inclusion of withdrawalsRoot in L2 block headers, solidifying the OP Stack’s ability to anchor dispute data efficiently. It also introduced operator fee fields and formally recognized MIPS64 in dispute logic, ensuring that fault proofs could now run faster, more securely, and with greater parallelism.

A critical component of the OP Stack’s security model is its fault-proof mechanism, which ensures the validity of Layer 2 state transitions through fraud detection rather than pre-execution verification. The initial implementation featured a monolithic Cannon-based fault proof system, which was later restructured to enhance modularity and reduce reliance on Optimism-specific execution logic.
Key developments in fault proofs include:
Cannon's Optimized Proof System: Introduced an approach where the execution client compiles directly into the proof system, simplifying the verification process⁵.
Multi-Client Fault Proofs: A strategic shift toward supporting multiple fault-proof implementations, increasing security resilience and minimizing the risks of single-client reliance⁶.
Introduction of Stage 1 Decentralization: The Guardian upgrade improved security council threshold mechanisms, decentralizing the governance of fault proofs⁷.
Settlement Layer Refinement: Modular proof verification was introduced, allowing future upgrades to transition toward ZK-enabled rollups without disrupting OP Stack’s core execution model⁸.
Modular Fraud Proofs Architecture: A long-term goal of OP Stack fault proofs is modular dispute resolution, allowing new execution clients to integrate their own fraud-proof mechanisms
All referenced documents are available in our open repository for transparency and further review: 🔗 FP research reports – Zenbit GitHub
Fault proofs are a foundational component of optimistic rollups, enabling trustless verification of off-chain state transitions. As the OP Stack advances toward greater decentralization and modularity, the architecture and implementation of fault proof mechanisms have evolved significantly. This milestone documents our research into the key architectures shaping this landscape—beginning with Optimism’s default Cannon FPVM, and expanding into cutting-edge systems like opML, oppAI, and Cartesi’s DAVE. Each mechanism offers unique trade-offs between performance, verifiability, and scalability, collectively enriching the design space for secure and efficient Layer 2 systems.
At the core of the OP Stack’s security model lies its Fault Proof system, a cryptoeconomic mechanism that ensures the correctness of off-chain state transitions by enabling anyone to challenge invalid claims posted to Ethereum. This milestone focused on analyzing the evolution of this mechanism, beginning with Optimism’s Cannon Fault Proof Virtual Machine (Cannon FPVM)—the default system currently securing Layer 2 outputs.
The Cannon fault proof system employs a dispute resolution game, where participants engage in an interactive bisection protocol to isolate a single disputed MIPS instruction. The system combines onchain and offchain components: MIPS.sol, a lightweight smart contract that deterministically executes a single instruction onchain, and the Cannon VM itself, a Go-based offchain emulator that computes the state transition trace and generates verifiable proofs. Key elements such as the DisputeGameFactory, AnchorStateRegistry, and OP-Challenger infrastructure coordinate to handle challenges, execute proofs, and enforce results. Memory in Cannon is modeled as a Merkleized 32-bit address space, enabling stateless onchain verification of computation with minimal inputs.
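The bisection protocol described above can be sketched as a toy simulation (illustrative Python, not the actual Cannon or op-challenger code): two parties hold full execution traces, and the dispute narrows to the first diverging step in logarithmically many interactions, after which a referee (standing in for MIPS.sol) would re-execute only that one instruction.

```python
# Toy sketch of an interactive bisection dispute: both parties agree on the
# starting state and disagree on the final state, so somewhere in between
# there is a first step where their traces diverge. Bisection finds it in
# O(log n) queries instead of replaying all n steps.

def bisect_dispute(proposer_trace, challenger_trace):
    """Return (index of first diverging step, number of bisection queries)."""
    assert proposer_trace[0] == challenger_trace[0], "must agree on the start state"
    lo, hi = 0, len(proposer_trace) - 1   # agree at lo, disagree at hi
    queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        queries += 1
        if proposer_trace[mid] == challenger_trace[mid]:
            lo = mid
        else:
            hi = mid
    return hi, queries

# 16-step traces that diverge from step 9 onward.
honest = list(range(16))
faulty = list(range(9)) + [f"bad{i}" for i in range(9, 16)]
step, queries = bisect_dispute(faulty, honest)
assert step == 9        # the single disputed instruction
assert queries <= 4     # ~log2(16) interactions, not 16 replays
```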
This architecture was tested in production during the release of feature-complete fault proofs on OP Sepolia, and subsequently evolved through multiple protocol upgrades to enhance modularity, precision, and compatibility with Ethereum’s Pectra roadmap. Our technical deep dive revealed strengths—such as deterministic execution, statelessness, and modular game design—but also current limitations, including limited syscall support and high computational overhead.
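The Merkleized memory model mentioned above can be illustrated with a minimal binary Merkle tree (a generic sketch, not Cannon's exact tree layout or hashing scheme): the onchain verifier needs only the memory root plus a short sibling-hash proof to check a single word, never the full memory image.

```python
import hashlib

# Generic Merkleized-memory sketch: hash memory words into a binary Merkle
# tree so one word can be verified statelessly against a 32-byte root.

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Sibling hashes from the leaf at `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

memory = [i.to_bytes(4, "big") for i in range(8)]   # 8 words of "memory"
root = merkle_root(memory)
proof = prove(memory, 5)
assert verify(root, memory[5], 5, proof)            # one word, log-size proof
assert not verify(root, b"\x00\x00\x00\xff", 5, proof)
```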
One of the most forward-looking proposals in fault proof innovation is opML, an optimistic verification system designed to support scalable and verifiable onchain machine learning (ML) inference. Inspired by optimistic rollups, opML replaces expensive zero-knowledge proofs with an interactive fraud-proof process, allowing ML computations to be challenged only when disputed. This approach significantly reduces overhead, making it feasible to run complex models onchain without compromising trust. At its core, opML features a Fraud Proof Virtual Machine (FPVM) optimized for ML workloads, equipped with Merkleized memory and a segmented layout that separates code, input/output buffers, oracle data, and model parameters. This structure supports a full 32-bit address space and deterministic execution, essential for cryptographic verifiability.
The opML architecture consists of four integrated components: the FPVM, a dual-compilation Machine Learning Engine (MLE), an interactive dispute game, and the protocol layer. The MLE is particularly innovative, compiling the same ML source code into two targets: one optimized for high-speed native execution (with GPU/CUDA support), and another for verifiable execution on the FPVM, using fixed-point arithmetic to preserve determinism. In typical use, submitters post computation results offchain, while verifiers can trigger a two-phase interactive dispute. The first phase isolates disputed computation nodes at the graph level with semi-native execution, while the second drills down into low-level FPVM instructions to prove or disprove correctness.
This multi-phase bisection protocol improves upon Optimism’s single-phase approach by tailoring fraud proof resolution to ML-specific workflows, striking a balance between performance and trust minimization. Disputes are resolved onchain via MIPS-like instructions executed within the FPVM, with proofs derived offchain using opML’s Merkleized memory model. Security is guaranteed through an AnyTrust model, where only one honest validator is needed to contest false claims. Submitters and challengers are incentivized with crypto-economic stakes and penalties to ensure honest behavior.
To further enhance scalability, opML incorporates performance optimizations like lazy loading—loading only required model segments into memory—and semi-native execution, where only disputed segments revert to verifiable computation. This approach enables opML to support large-scale models (e.g. 7B LLaMA) while maintaining verifiability when needed. On the protocol layer, smart contract interfaces manage submission, challenge periods, and dispute resolution workflows, enabling seamless integration with onchain systems. While opML still faces constraints—such as fixed finality windows and memory limitations—it presents a robust pathway for integrating verifiable ML into rollup environments, aligning with the broader goals of modular and decentralized computation.
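The fixed-point arithmetic that opML's verifiable compilation target relies on can be shown in a few lines. The Q16.16 format below is an assumed example, not opML's actual encoding, but it illustrates why integer math keeps a replayed inference bit-exact across machines, which floating point does not guarantee.

```python
# Why verifiable ML execution uses fixed point: floating-point results can
# differ across hardware and compilers, but integer arithmetic is bit-exact,
# so the same inference replayed inside an FPVM reproduces the same trace.
SCALE = 1 << 16  # Q16.16 fixed point (an illustrative choice)

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> 16          # deterministic integer multiply

def to_float(x: int) -> float:
    return x / SCALE

w, x = to_fixed(0.5), to_fixed(3.0)
y = fixed_mul(w, x)
assert to_float(y) == 1.5          # exact, replayable on any machine
```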
The oppAI framework introduces a novel hybrid architecture that merges the computational efficiency of Optimistic Machine Learning (opML) with the strong privacy guarantees of Zero-Knowledge Machine Learning (zkML). Designed to enable secure, verifiable, and privacy-preserving AI inference on blockchain systems, oppAI addresses the core tradeoff in onchain AI: achieving both performance and confidentiality. By allowing developers to selectively partition AI models into components executed via optimistic or zero-knowledge verification, oppAI provides a flexible system tailored to diverse privacy and cost constraints.
At the core of oppAI lies a dual execution model. Non-sensitive or performance-critical model components are processed via opML’s Fraud Proof Virtual Machine (FPVM), while sensitive logic or proprietary model weights are processed under zkML, using zero-knowledge proofs such as zk-SNARKs. This architecture enables the same AI model to achieve high performance where needed, and high privacy where necessary. The execution workflow begins with a partitioned AI model—e.g., f₁ᵒᵖ, f₁ᶻᵏ, ..., fₙᵒᵖ, fₙᶻᵏ—where each submodel is designated for either optimistic or zk execution. After processing, the results are submitted to an onchain verifier, and a challenge-response game can be initiated if needed.
The oppAI system architecture builds on opML’s foundation and extends it with additional zkML capabilities. It includes components such as:
The FPVM (adapted from opML) for step-by-step verifiability in optimistic disputes;
A dual-compiled Machine Learning Engine, supporting both native and FPVM-compatible execution;
An interactive dispute resolution game for verifying opML results;
A zkML Prover that generates ZKPs for private components, alongside an onchain Verifier contract to check them.
This design offers direct applications within fault proof systems like those used in the OP Stack. First, oppAI enables privacy-preserving fault proofs, allowing sensitive data (e.g. user inputs or proprietary models) to be verified without full exposure. Second, it proposes a hybrid verification scheme where only the most privacy-sensitive operations incur zk costs, while the rest benefit from lower overhead optimistic execution. This creates a more efficient yet secure fraud proof system, especially valuable for increasingly complex computations such as AI oracles or ML-based coordination mechanisms in DAOs and L2 governance.
From a security standpoint, oppAI adopts an AnyTrust model, requiring only a single honest verifier to guarantee system integrity. Its economic security model disincentivizes attacks through dynamic inference pricing, calculated to keep the cost of reconstructing private models prohibitively high. Notably, the system allows tuning the ratio between zkML and opML (represented by p) to balance the tradeoff between privacy and efficiency. Performance benchmarks confirm that reducing zkML usage (lower p) significantly lowers computational overhead while still enabling verification of critical model segments like attention layers in large language models (e.g. Stable Diffusion, LLaMA).
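A back-of-the-envelope cost model (with made-up relative costs, not benchmarks from the oppAI work) shows why lowering the zkML fraction p cuts overhead so sharply while zk coverage can remain on the privacy-critical layers:

```python
# Hypothetical cost model: a fraction p of the model's operations run under
# zkML (expensive proving), the rest under opML (cheap optimistic execution).
# The per-op costs below are illustrative placeholders, not measured values.
ZK_COST_PER_OP = 1000.0   # assumed relative zk proving cost
OP_COST_PER_OP = 1.0      # assumed relative optimistic execution cost

def inference_cost(total_ops: int, p: float) -> float:
    """Expected verification cost when fraction p of ops use zkML."""
    return total_ops * (p * ZK_COST_PER_OP + (1 - p) * OP_COST_PER_OP)

full_zk = inference_cost(1_000_000, p=1.0)
hybrid  = inference_cost(1_000_000, p=0.05)   # zk only for sensitive layers
assert hybrid < full_zk / 15                   # order-of-magnitude savings
```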
In practical terms, oppAI’s capabilities significantly expand the scope of OP Stack-based chains by allowing:
Secure AI oracles, which deliver verifiable predictions to smart contracts without revealing internal logic;
Privacy-enhanced onchain inference, protecting model IP while maintaining fraud-proof guarantees;
Support for high-complexity computation, including federated ML, game theory agents, and dynamic governance models.
Despite its advantages, oppAI introduces new design and implementation challenges. Determining the optimal partitioning of model components to satisfy privacy demands requires interdisciplinary knowledge of ML and blockchain. Furthermore, managing performance tradeoffs and adapting to rapidly evolving zkML and opML ecosystems requires continual iteration. Nonetheless, oppAI sets a strong precedent for modular, privacy-aware fault proof design—bridging efficient onchain computation with the next generation of decentralized AI applications.
DAVE (a name, not an acronym) is Cartesi’s advanced fraud-proof protocol, designed to strengthen decentralization, liveness, and efficiency in blockchain verification. It introduces a permissionless, tournament-style dispute mechanism, offering a compelling alternative to Optimism’s Fault Proofs and Arbitrum’s BoLD.
At its core, DAVE leverages Permissionless Refereed Tournaments (PRT), a novel framework that resists Sybil attacks, keeps hardware and bond requirements constant, and gives honest actors an exponential resource advantage. This design ensures that a single honest validator can defend the network—even against coordinated adversaries—at minimal cost.
DAVE operates on top of the Cartesi Machine, a deterministic RISC-V emulator that separates execution logic into two layers: a micro-architecture implemented in Solidity and a high-performance offchain computation layer. This dual-layer architecture ensures execution integrity while keeping the onchain footprint small.
Unlike traditional fault proof systems, DAVE employs a tournament-style challenge mechanism, where validators resolve disputes through logarithmically-scaling games. These disputes typically conclude within 2–5 challenge periods, significantly reducing the overhead seen in earlier systems like Cartesi’s own PRT (which could take up to 20 weeks).
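A toy simulation of a tournament of this shape (illustrative only, not Cartesi's actual PRT protocol) shows the logarithmic scaling: one honest claim among 64 wins in six rounds of pairwise matches rather than 63 sequential disputes.

```python
import math

# Toy tournament model: n competing claims are paired off each round, and
# each match is itself a small dispute that the honest claim always wins
# (the 1-of-N honesty assumption). The honest validator therefore plays
# only ~log2(n) matches, even against many Sybil claims.

def run_tournament(claims):
    """Eliminate claims pairwise until one remains; return (winner, rounds)."""
    rounds = 0
    while len(claims) > 1:
        rounds += 1
        next_round = []
        for i in range(0, len(claims), 2):
            pair = claims[i:i + 2]
            # The honest claim wins any match it plays; a lone leftover
            # claim gets a bye into the next round.
            next_round.append("honest" if "honest" in pair else pair[0])
        claims = next_round
    return claims[0], rounds

claims = ["sybil"] * 63 + ["honest"]          # 64 claims, one honest
winner, rounds = run_tournament(claims)
assert winner == "honest"
assert rounds == math.ceil(math.log2(64))     # 6 rounds, not 63 matches
```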
Compared to other systems, DAVE offers notable advantages:
Versus Optimism’s OPFP, it provides stronger Sybil resistance, faster dispute resolution, and lower validator costs.
Compared to Arbitrum’s BoLD, DAVE maintains similar security guarantees with significantly reduced bond requirements (~3 ETH vs. 3,600 ETH).
Internally, it improves liveness and reduces delay over Cartesi’s prior models while keeping decentralization intact.
DAVE’s flexible architecture allows it to serve both rollups with continuous input processing and compute-focused systems. While it currently relies on the Cartesi Machine, its execution environment agnosticism opens paths for integration with other rollup ecosystems.
All referenced documents are available in our open repository for transparency and further review: 🔗 FP research reports – Zenbit GitHub
Originally titled “2. Research on oppAI compatibility with the OP Stack,” this milestone was later renamed to “2. FPVM Comparative Analysis” to better reflect the broader scope of our investigation. While the initial focus centered on oppAI and opML, our research expanded to include a full evaluation of Optimism’s Cannon FPVM and Cartesi’s DAVE—both foundational and alternative approaches to fault proof architecture. This milestone presents a structured comparison of these four mechanisms, each tackling a different facet of the fault proof challenge, from deterministic execution and computational efficiency to privacy-preserving inference and decentralized dispute resolution.
Our primary objective is to analyze the tradeoffs in architecture, performance, security, and accessibility, and to propose integration pathways that strengthen the OP Stack’s fault proof system. This comparative research is intended not only as a conceptual evaluation but as a blueprint for the OPcity prototype, guiding future technical contributions to the OP Stack and aligning with the broader mission of building decentralized, modular, and verifiable infrastructure.
These four systems represent three main verification paradigms. Optimism’s Cannon FPVM relies on deterministic execution using a lightweight MIPS VM and binary bisection games. opML enhances this by integrating ML-based prediction to streamline verification. oppAI extends opML by adding selective zkML to protect privacy-sensitive components. Meanwhile, Cartesi’s DAVE diverges with a tournament-based architecture running on a full RISC-V emulator, prioritizing permissionless participation and Sybil resistance.
1. Execution Environment:
Optimism uses a MIPS-based VM for its simplicity and determinism
Cartesi employs a more powerful RISC-V emulator with two-layer implementation
opML works within existing VMs but adds ML capabilities
oppAI partitions execution between zkML and opML components
2. Verification Approach:
Optimism uses a traditional bisection protocol to narrow disputes
opML enhances verification with machine learning predictions
oppAI selectively applies zero-knowledge proofs for privacy-critical components
DAVE implements tournament-style verification to improve decentralization
3. State Representation:
All systems use Merkle trees for state representation, but with different implementations
Optimism and Cartesi focus on complete state transitions
opML and oppAI allow for partial verification of specific components

While Cannon FPVM sets a reliable baseline, its simplicity constrains scalability. opML reduces computational load via ML-enhanced execution, offering performance improvements without altering core architecture. oppAI introduces more variable overhead due to zkML, but allows developers to balance privacy and efficiency by configuring which sub-models require zero-knowledge proofs. DAVE’s structure ensures that honest participants incur low costs even in worst-case scenarios, making it well-suited for decentralized validator environments.
1. Computational Requirements:
Optimism's Cannon requires full execution of disputed transactions
opML reduces computation through predictive models
oppAI optimizes by applying heavy computation only to privacy-sensitive parts
DAVE minimizes honest validator costs regardless of adversary resources
2. Verification Time:
Optimism's verification time scales with computation complexity
opML potentially accelerates verification through ML predictions
oppAI's verification time varies based on privacy requirements
DAVE guarantees resolution in 2-5 challenge periods regardless of adversary count
3. Resource Asymmetry:
DAVE provides the strongest resource asymmetry, giving honest validators exponential advantage
opML offers efficiency advantages through prediction
oppAI balances resources based on privacy needs
Optimism requires similar resources for all participants
All systems adhere to a 1-of-N honesty assumption—where a single honest verifier can enforce correctness. Cannon and opML rely on traditional bond-based incentives. oppAI innovates by making attack costs dynamic, based on the proportion of the model using zkML. DAVE provides the most robust Sybil resistance and censorship resilience, minimizing reliance on high bonds by leveraging logarithmic dispute scaling and permissionless participation.
Adversary Models:
All systems use a 1-of-N security model where a single honest validator can enforce correctness
DAVE specifically addresses Sybil attacks through its tournament structure
oppAI adds privacy considerations to the security model
opML potentially improves detection of sophisticated attacks
Economic Security:
Optimism relies on bonds to ensure honest behavior
DAVE minimizes bond requirements while maintaining security
oppAI introduces variable costs based on privacy needs
opML maintains the economic model of its host system
Censorship Resistance:
DAVE explicitly addresses censorship: an attacker must censor challenges for more than one full challenge period to break consensus
Other systems have less explicit censorship resistance guarantees
Cartesi’s DAVE leads in terms of decentralization, with its low capital requirements and open validator access encouraging broader participation. Cannon maintains moderate requirements, while opML and oppAI, despite their technical innovation, face participation challenges due to higher complexity and expertise requirements—particularly in machine learning and zero-knowledge systems.
Capital Requirements:
DAVE explicitly minimizes capital requirements (3 ETH vs. 3,600 ETH for Arbitrum's BoLD)
Optimism has moderate requirements
oppAI and opML inherit requirements from their base systems
Technical Barriers:
opML and oppAI introduce additional complexity through ML components
DAVE and Optimism have more straightforward verification processes
All systems require some technical expertise to participate effectively
Validator Diversity:
DAVE's low capital requirements potentially enable greater validator diversity
ML-based approaches might limit participation to those with ML expertise
All systems benefit from diverse validator sets for security
Several integration pathways could merge the best features of these mechanisms into a next-generation fault proof system. For example, pairing Cannon FPVM with opML’s prediction layer could optimize dispute prioritization without modifying the core VM. Adding oppAI’s privacy-preserving zkML submodules would extend Cannon’s use cases to sensitive applications like AI oracles. Most transformative would be adopting DAVE’s tournament-style dispute resolution, enabling permissionless participation with lower bonds and faster resolution—making Cannon-based fault proofs more robust and accessible.
More ambitious would be a multi-system integration—combining Cannon’s deterministic foundation, opML’s efficiency, oppAI’s privacy, and DAVE’s decentralization. While technically demanding, this could yield a modular fault proof architecture tailored for emerging rollup demands, from governance security to machine-learning-powered agents.
Potential Integration (Cannon FPVM + opML):
Enhance Cannon FPVM with ML-based prediction for faster verification
Use ML to identify likely dispute points before full verification
Maintain Cannon's deterministic execution while improving efficiency
Implementation Approach:
Add ML prediction layer to Canon without modifying core execution
Train models on historical dispute patterns
Use predictions to prioritize verification steps
Benefits:
Faster dispute resolution
Reduced computational overhead
Maintained security guarantees
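To make the prediction idea concrete, here is a minimal sketch of how a hypothetical ML prediction layer might rank execution-trace segments by estimated dispute likelihood, so verification effort is spent on high-risk steps first. The scoring function and its feature weights are illustrative stand-ins, not opML's actual engine.

```python
# Sketch: a hypothetical prediction layer that ranks execution-trace
# segments by estimated dispute likelihood, so the bisection game can
# probe high-risk steps first. The scoring model (a weighted feature
# sum) is a stand-in for a trained model, not opML's real machinery.

def dispute_score(segment):
    # Toy features: syscalls and memory writes are assumed to be
    # historically dispute-prone; the weights are illustrative only.
    return 0.7 * segment["syscalls"] + 0.3 * segment["mem_writes"]

def prioritize(trace_segments):
    # Highest estimated risk first; ties broken by trace position.
    return sorted(trace_segments,
                  key=lambda s: (-dispute_score(s), s["start"]))

segments = [
    {"start": 0,    "syscalls": 0, "mem_writes": 2},
    {"start": 1000, "syscalls": 5, "mem_writes": 1},
    {"start": 2000, "syscalls": 1, "mem_writes": 9},
]
ordered = prioritize(segments)
```

Because the prediction only reorders which steps are checked first, Canon's deterministic execution and final verdict are untouched, which is exactly the property the integration above requires.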
Potential Integration:
Add privacy capabilities to Canon through selective zkML
Partition sensitive computations for privacy protection
Maintain efficiency for non-sensitive operations
Implementation Approach:
Implement model partitioning within Canon
Add zkML verification for privacy-critical components
Develop clear interfaces between zkML and opML parts
Benefits:
Enhanced privacy for sensitive operations
Maintained efficiency for standard operations
New use cases for private computation
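A minimal sketch of the partitioning step: each model component is routed to either optimistic or zero-knowledge verification based on a sensitivity flag. The layer names and the `sensitive` attribute are hypothetical; real partitioning would weigh proving cost against confidentiality per component.

```python
# Sketch: routing model components between opML (fast, optimistic)
# and zkML (private, proven) execution, per the hybrid idea above.
# Layer names and the "sensitive" flag are illustrative assumptions.

def partition(layers):
    zk, op = [], []
    for layer in layers:
        (zk if layer["sensitive"] else op).append(layer["name"])
    return {"zkml": zk, "opml": op}

model = [
    {"name": "embedding", "sensitive": False},
    {"name": "attention", "sensitive": True},   # e.g. proprietary weights
    {"name": "mlp",       "sensitive": False},
    {"name": "head",      "sensitive": True},
]
plan = partition(model)
```

The clear interface mentioned above corresponds to the boundary between the two lists: everything in `zkml` needs a proof before acceptance, while everything in `opml` is accepted optimistically and only re-executed on challenge.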
Potential Integration:
Adopt DAVE's tournament approach for Canon's dispute resolution
Maintain Canon's execution environment while improving the challenge mechanism
Reduce capital requirements for participation
Implementation Approach:
Implement tournament-style challenges within Optimism's framework
Adapt Canon to work with logarithmic dispute resolution
Maintain compatibility with existing Optimism deployments
Benefits:
Improved decentralization through lower capital requirements
Enhanced resistance to Sybil attacks
Faster dispute resolution
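The tournament mechanism can be sketched in miniature: competing claims are paired each round and a referee eliminates one per match, so the number of rounds grows only logarithmically with the number of claims. The referee below is a stub standing in for onchain step verification; in a real system each match would be decided by executing the disputed step.

```python
# Sketch: DAVE-style tournament resolution in miniature. Claims are
# paired each round; a stub referee (standing in for one step of
# onchain verification) eliminates the loser. Rounds scale with
# log2 of the claim count, which keeps honest-validator costs low.

import math

def run_tournament(claims, correct_claim):
    rounds = 0
    while len(claims) > 1:
        rounds += 1
        nxt = []
        for i in range(0, len(claims), 2):
            pair = claims[i:i + 2]
            # Stub referee: the correct claim always survives its match.
            winner = correct_claim if correct_claim in pair else pair[0]
            nxt.append(winner)
        claims = nxt
    return claims[0], rounds

claims = [f"claim_{i}" for i in range(8)] + ["honest"]
winner, rounds = run_tournament(claims, "honest")
```

Note how a single honest claim survives against eight Sybil claims in only four rounds, which is the intuition behind DAVE's low bond requirements.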
Comprehensive Approach:
Combine elements from all four systems for a next-generation fault proof system
Use Canon's deterministic execution as the foundation
Enhance with opML's efficiency improvements
Add oppAI's privacy capabilities for sensitive operations
Implement DAVE's tournament structure for dispute resolution
Implementation Challenges:
Complexity of combining multiple novel approaches
Ensuring security guarantees are maintained
Managing the increased technical barriers to participation
Potential Benefits:
Best-in-class performance across all metrics
Support for diverse use cases including privacy-sensitive applications
Improved accessibility and decentralization
While Canon FPVM provides the deterministic foundation for state validation in the OP Stack, opML introduces machine learning-driven optimizations that reduce overhead and improve execution speed. oppAI extends this with a hybrid zkML/opML approach that brings selective privacy to verifiable AI computations, and DAVE’s tournament-based protocol raises the bar for decentralization, liveness, and Sybil resistance.
Rather than selecting a single solution, our analysis supports a modular integration of these technologies as the optimal path forward. By synthesizing their complementary strengths and mitigating their individual tradeoffs, we envision a next-generation fault proof system that is efficient, private, accessible, and secure—one capable of meeting the evolving needs of Ethereum rollups and complex decentralized applications.
This vision forms the design framework for the OPcity prototype, a research-driven implementation of an advanced fault proof system built on the OP Stack. Our blueprint is structured across four interoperable layers:
1. Execution Layer: Modified Canon FPVM with RISC-V compatibility
Deterministic execution environment based on Canon FPVM
Extended instruction set supporting both MIPS and RISC-V operations
Unified state representation using enhanced Merkle trees
Compatibility layer for existing applications
2. Intelligence Layer: ML-enhanced prediction and optimization engine
Predictive models for identifying potential dispute points
Optimization algorithms for efficient execution paths
Lightweight DNN library for in-VM machine learning
Training framework for continuous improvement
3. Privacy Layer: Selective zero-knowledge proof system
Model partitioning framework for separating privacy-sensitive components
Zero-knowledge proof generation for protected components
Efficient verification of ZK proofs on-chain
Economic security model for privacy protection
4. Verification Layer: Tournament-based dispute resolution system
Permissionless tournament structure for dispute resolution
Logarithmic scaling with adversary count
Low capital requirements for participation
Guaranteed dispute resolution timeframes
Together, these layers interact through clean, modular interfaces, enabling phased integration and future extensibility. This composable approach aligns with the principles of the OP Stack and supports OPcity’s long-term goal: to contribute a production-ready, decentralized fault proof architecture that strengthens Ethereum’s rollup ecosystem and expands its utility to civic infrastructure, public goods, and autonomous organizations.
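The layered blueprint above can be sketched as four components wired through minimal interfaces. All class and method names here are hypothetical illustrations of the composability described, not a real OP Stack API; each layer is a stub that a production implementation would replace.

```python
# Sketch: the four OPcity layers behind minimal interfaces. Names are
# hypothetical; each stub stands in for the real component described
# in the blueprint above.

class ExecutionLayer:          # Canon-style deterministic VM
    def run(self, program):
        return f"state_root({program})"

class IntelligenceLayer:       # ML-based dispute-point prediction
    def predict_dispute_points(self, trace):
        return [0]             # stub: flag the first step

class PrivacyLayer:            # selective zero-knowledge proofs
    def prove(self, component):
        return f"zk_proof({component})"

class VerificationLayer:       # tournament-based dispute resolution
    def resolve(self, claims):
        return claims[0]       # stub: first claim wins

class OPCityStack:
    def __init__(self):
        self.execution = ExecutionLayer()
        self.intelligence = IntelligenceLayer()
        self.privacy = PrivacyLayer()
        self.verification = VerificationLayer()

    def settle(self, program, claims):
        root = self.execution.run(program)
        hints = self.intelligence.predict_dispute_points(root)
        proof = self.privacy.prove("sensitive_submodel")
        winner = self.verification.resolve(claims)
        return {"root": root, "hints": hints,
                "proof": proof, "winner": winner}

stack = OPCityStack()
result = stack.settle("tx_batch", ["claim_a", "claim_b"])
```

Because each layer is reached only through its interface, any one of them can be upgraded (say, swapping the stub verifier for a tournament implementation) without touching the others, which is the phased-integration property the design aims for.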
Ethereum has long envisioned itself as the World Computer—a shared global platform for registering and verifying the state of public goods, collective decisions, and decentralized markets. By embedding trust directly into code and consensus, Ethereum enables a new kind of programmable legitimacy. Yet for these values to reach governments, organizations, and everyday people, the technology must scale.
Layer 2 (L2) rollups have emerged as a practical solution to Ethereum’s scalability challenges, addressing issues such as cost, capacity, and transaction throughput. Among these, optimistic rollups allow transactions to be executed off-chain while posting minimal data on-chain, thereby preserving decentralization and network security.
The OP Stack, developed by OP Labs and coordinated by the Optimism Foundation, is one of the most widely adopted open-source frameworks for deploying dedicated L2 rollups. Its minimal, modular architecture allows chains to post periodic state roots to Ethereum (or alternative L1s), relying on optimistic validation. These state roots can be challenged within a seven-day window using Fault Proof mechanisms. A robust and permissionless implementation of these mechanisms is essential to achieving Stage 1 requirements in L2BEAT’s state validation maturity framework.
This research report examines the evolution and implementation of fault proof systems in the OP Stack. It builds on our earlier work, OP City: Research and Optimization of OP Stack Deployments and Canon Fault Proofs VM, and presents findings across three milestones: (1) a comprehensive review of OP Stack upgrades and their impact on fault proofs; (2) a technical analysis of emerging mechanisms like Canon FPVM, opML, oppAI, and DAVE; and (3) a comparative benchmark to assess readiness, performance, and integration paths. Together, these contributions aim to advance the decentralization and resilience of the OP Stack—and, by extension, Ethereum as a truly global computational engine.
This milestone also included a version benchmark of the OP Stack, conducted through a dual-node deployment to evaluate the performance of rollups built from different configurations and protocol versions. The goal was to compare their behavior under equivalent conditions and identify meaningful differences in efficiency and execution. The results of this test were documented in our previous Mirror article, and the full research process, including setup and data, is available in our GitHub repository.
Before the OP Stack was formalized as a unified, modular framework, its architecture evolved through iterative stages that laid the groundwork for scalable Layer 2 execution. The journey began with the Unipig Demo in October 2019, a proof-of-concept developed in collaboration with Uniswap that showcased the power of Optimistic Rollups. It demonstrated a ~10× increase in throughput and significant reductions in gas fees compared to Ethereum mainnet, proving the concept’s viability.
This was followed by the launch of the SNX Testnet in September 2020, which introduced general-purpose smart contract execution on Layer 2. Transactions ran at roughly 10% of L1’s cost, with improved latency and faster confirmation times.
In January 2021, the network transitioned into its first mainnet phase—often referred to as the OVM Era—with a version of the Optimistic Virtual Machine that relied on a custom Solidity transpiler. While functional, this setup introduced significant inefficiencies: over 25,000 lines of bespoke code, complex tooling, and inflated state transition costs.
By October 2021, Optimism rolled out the EVM Equivalence Upgrade, a major step forward that removed the transpiler, drastically simplified the execution layer, and brought the network into full compatibility with Ethereum tooling. Throughput increased to ~100 TPS, and developer experience improved significantly. Just two months later, in December 2021, the launch of OP Mainnet enabled public contract deployment, catalyzing adoption by removing whitelists. This achievement in open access marked a turning point in Optimism’s growth.
The culmination of these early experiments arrived with the Bedrock Upgrade in June 2023, which replaced legacy OVM components with a leaner, modular, and Ethereum-equivalent design. Bedrock slashed code complexity by ~90%, cut transaction costs by ~30%, and boosted throughput to ~450 TPS—ushering in the era of the OP Stack and laying the foundation for a Superchain of standardized, interoperable rollups.

All referenced documents are available in our open repository for transparency and further review: 🔗 OP Stack Protocol Upgrades Review – Zenbit GitHub
Since the formalization of the OP Stack, the protocol has undergone 15 official upgrades, ranging from foundational architectural transitions to fine-grained feature releases across the OP Stack’s modular layers. Our review spanned 62 curated sources, including governance forum proposals, developer documentation, audit reports, and OP Stack specification entries. These materials were analyzed to trace the evolution of the OP Stack’s dispute system—from the launch of permissionless Cannon-based fault proofs in Protocol Upgrade #7, to the infrastructure pre-requisites for multi-threaded MIPS64 fault games in Upgrades #14 and #15. Sources such as governance threads, public audit reports, and design documents not only clarified technical intent, but also allowed us to assess how each upgrade contributed to meeting Stage 1 decentralization criteria from the L2BEAT framework.
Through this analysis, we identified 13 upgrades with direct impact on the settlement layer, including 8 upgrades that significantly advanced the fault proof system —transforming it from a trusted fallback into a fully modular, permissionless verification layer.
Superchain Config laid the groundwork for coordinated multi-chain security by introducing a global SuperchainConfig contract on L1. This enabled the Security Council to pause cross-chain operations via an expanded emergency mechanism, extending control beyond withdrawals to include message relays and token bridges. It also included critical fixes—such as closing reentrancy attack vectors—and enhanced L2 token compatibility through deterministic CREATE2 deployments. While not altering fault proof logic directly, this upgrade provided a crucial fallback security layer for fault proof systems, strengthening the OP Stack’s operational resilience.
The state validation improvements began with Fault Proofs, which marked a foundational shift in the OP Stack’s security model by replacing the previously trusted state root proposer mechanism with a permissionless fault proof system. This upgrade introduced a modular, on-chain binary bisection game powered by the Cannon VM, allowing any participant to propose or challenge state roots. Supporting this dispute flow, key components such as the DisputeGameFactory, AnchorStateRegistry, and bonding mechanisms were deployed. Together, these features enabled the OP Stack to meet the Stage 1 decentralization milestone set by L2BEAT, especially through the integration of a Security Council override to pause withdrawals in emergencies.
Guardian introduced formalized guardian governance. This upgrade extended the Security Council’s role by granting them authority over L2 ProxyAdmin control and emergency withdrawal pausing, providing a critical safeguard during fault proof escalation periods. It was a vital step in building the governance infrastructure required for fault proof decentralization, ensuring that challenge games could proceed trustlessly without introducing systemic risk.
Granite focused on enhancing the protocol infrastructure to support more advanced proof systems. Updates to SystemConfig and dispute game interfaces increased flexibility for challenger clients and made space for introducing alternative VMs, such as MIPS64. These changes future-proofed the OP Stack’s dispute layer, ensuring compatibility with evolving fault proof implementations such as MT-Cannon and potential zk variants.
As fault proofs moved into production, Holocene standardized the format for L2-to-L1 outputs, refining how withdrawals and challenge games would be validated across chains. These improvements enhanced compatibility with new proof types and helped streamline the verification logic for post-upgrade withdrawals, an essential update as the system transitioned away from trusted third parties.
In preparation for Ethereum’s Pectra hard fork, Pre-Pectra Readiness introduced important features to support hybrid and forward-compatible proofs. These included access to the L1 Beacon Root, enabling withdrawal verification against Ethereum’s consensus layer, and precompiles such as BLS operations and timestamp access that are critical for next-generation dispute VMs. These changes ensured that fault proofs could continue to operate in a post-Dencun Ethereum environment and opened the door to zk-fault proof hybrids.
This set the stage for Pre-Isthmus, which established the L1 infrastructure for MT-Cannon, a multithreaded, 64-bit evolution of the Cannon VM. Without requiring a hard fork, this upgrade deployed new contracts such as OPChainManagerV2 and MIPS64.sol, and restructured the fault game logic to support more scalable dispute execution. MT-Cannon could now be tested in production environments in parallel with legacy Cannon, serving as a trial run for the more performant and deterministic VM architecture.
Finally, Isthmus completed the transition by activating MT-Cannon as the canonical fault proof VM. This hard fork integrated Pectra-compatible features, such as support for new precompiles and the inclusion of withdrawalsRoot in L2 block headers, solidifying the OP Stack’s ability to anchor dispute data efficiently. It also introduced operator fee fields and formally recognized MIPS64 in dispute logic, ensuring that fault proofs could now run faster, more securely, and with greater parallelism.

A critical component of the OP Stack’s security model is its fault-proof mechanism, which ensures the validity of Layer 2 state transitions through fraud detection rather than pre-execution verification. The initial implementation featured a monolithic Cannon-based fault proof system, which was later restructured to enhance modularity and reduce reliance on Optimism-specific execution logic.
Key developments in fault proofs include:
Cannon's Optimized Proof System: Introduced an approach where the execution client compiles directly into the proof system, simplifying the verification process⁵.
Multi-Client Fault Proofs: A strategic shift toward supporting multiple fault-proof implementations, increasing security resilience and minimizing the risks of single-client reliance⁶.
Introduction of Stage 1 Decentralization: The Guardian upgrade improved security council threshold mechanisms, decentralizing the governance of fault proofs⁷.
Settlement Layer Refinement: Modular proof verification was introduced, allowing future upgrades to transition toward ZK-enabled rollups without disrupting OP Stack’s core execution model⁸.
Modular Fraud Proofs Architecture: A long-term goal of OP Stack fault proofs is modular dispute resolution, allowing new execution clients to integrate their own fraud-proof mechanisms.
All referenced documents are available in our open repository for transparency and further review: 🔗 FP research reports – Zenbit GitHub
Fault proofs are a foundational component of optimistic rollups, enabling trustless verification of off-chain state transitions. As the OP Stack advances toward greater decentralization and modularity, the architecture and implementation of fault proof mechanisms have evolved significantly. This milestone documents our research into the key architectures shaping this landscape—beginning with Optimism’s default Canon FPVM, and expanding into cutting-edge systems like opML, oppAI, and Cartesi’s DAVE. Each mechanism offers unique trade-offs between performance, verifiability, and scalability, collectively enriching the design space for secure and efficient Layer 2 systems.
At the core of the OP Stack’s security model lies its Fault Proof system, a cryptoeconomic mechanism that ensures the correctness of off-chain state transitions by enabling anyone to challenge invalid claims posted to Ethereum. This milestone focused on analyzing the evolution of this mechanism, beginning with Optimism’s Canon Fault Proof Virtual Machine (Canon FPVM)—the default system currently securing Layer 2 outputs.
Canon employs a dispute resolution game, where participants engage in an interactive bisection protocol to isolate a single disputed MIPS instruction. The system combines onchain and offchain components: MIPS.sol, a lightweight smart contract that deterministically executes a single instruction, and Cannon, a Go-based offchain VM that computes the state transition trace and generates verifiable proofs. Key elements such as the DisputeGameFactory, AnchorStateRegistry, and OP-Challenger infrastructure coordinate to handle challenges, execute proofs, and enforce results. Memory in Cannon is modeled as a Merkleized 32-bit address space, enabling stateless onchain verification of computation with minimal inputs.
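The bisection protocol described above can be sketched in a few lines: two parties hold diverging execution traces, and repeatedly halving the disagreement interval isolates the single instruction where they diverge, which is then the only step that must be re-executed onchain. This is a simplified model of the game's core loop, not the production challenger logic.

```python
# Sketch: the interactive bisection at the heart of Canon's dispute
# game. Halving the disagreement interval isolates the first step
# where two traces diverge; only that one instruction then needs
# onchain re-execution (by MIPS.sol in the real system).

def bisect_dispute(trace_a, trace_b):
    # Precondition: traces agree at step 0 and disagree at the end.
    lo, hi = 0, len(trace_a) - 1
    steps = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        steps += 1
        if trace_a[mid] == trace_b[mid]:
            lo = mid           # still in agreement: divergence is later
        else:
            hi = mid           # already diverged: divergence is earlier
    return hi, steps           # first disputed instruction index

honest = list(range(1024))
faulty = honest[:700] + [x + 1 for x in honest[700:]]  # diverges at 700
index, rounds = bisect_dispute(honest, faulty)
```

A trace of 1024 steps is narrowed to one disputed instruction in ten rounds, illustrating why the onchain cost of a dispute scales with the logarithm of the computation length rather than with the computation itself.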
This architecture was tested in production during the release of feature-complete fault proofs on OP Sepolia, and subsequently evolved through multiple protocol upgrades to enhance modularity, precision, and compatibility with Ethereum’s Pectra roadmap. Our technical deep dive revealed strengths—such as deterministic execution, statelessness, and modular game design—but also current limitations, including limited syscall support and high computational overhead.
One of the most forward-looking proposals in fault proof innovation is opML, an optimistic verification system designed to support scalable and verifiable onchain machine learning (ML) inference. Inspired by optimistic rollups, opML replaces expensive zero-knowledge proofs with an interactive fraud-proof process, allowing ML computations to be challenged only when disputed. This approach significantly reduces overhead, making it feasible to run complex models onchain without compromising trust. At its core, opML features a Fraud Proof Virtual Machine (FPVM) optimized for ML workloads, equipped with Merkleized memory and a segmented layout that separates code, input/output buffers, oracle data, and model parameters. This structure supports a full 32-bit address space and deterministic execution, essential for cryptographic verifiability.
The opML architecture consists of four integrated components: the FPVM, a dual-compilation Machine Learning Engine (MLE), an interactive dispute game, and the protocol layer. The MLE is particularly innovative, compiling the same ML source code into two targets: one optimized for high-speed native execution (with GPU/CUDA support), and another for verifiable execution on the FPVM, using fixed-point arithmetic to preserve determinism. In typical use, submitters post computation results optimistically, while verifiers can trigger a two-phase interactive dispute. The first phase isolates disputed computation nodes at the graph level with semi-native execution, while the second drills down into low-level FPVM instructions to prove or disprove correctness.
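The fixed-point requirement can be illustrated with a tiny example: integer arithmetic gives bit-identical results on every machine, whereas floating point may not. The Q16.16 format below is an illustrative choice for the sketch, not opML's actual number format.

```python
# Sketch: why the verifiable opML target uses fixed-point arithmetic.
# Integer math is bit-identical across machines, which deterministic
# fraud proofs require. Q16.16 here is illustrative, not opML's
# actual format.

SCALE = 1 << 16  # Q16.16 fixed point: 16 integer bits, 16 fraction bits

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # Product of two Q16.16 numbers, rescaled back to Q16.16.
    return (a * b) // SCALE

def to_float(a):
    return a / SCALE

w, x = to_fixed(0.5), to_fixed(3.25)
y = fixed_mul(w, x)            # deterministic 0.5 * 3.25
```

Every validator computing `fixed_mul(w, x)` obtains exactly the same integer, so the Merkleized memory roots derived from such values match across honest parties, which is what makes the dispute game decidable.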
This multi-phase bisection protocol improves upon Optimism’s single-phase approach by tailoring fraud proof resolution to ML-specific workflows, striking a balance between performance and trust minimization. Disputes are resolved onchain via MIPS-like instructions executed within the FPVM, with proofs derived offchain using opML’s Merkleized memory model. Security is guaranteed through an AnyTrust model, where only one honest validator is needed to contest false claims. Submitters and challengers are incentivized with crypto-economic stakes and penalties to ensure honest behavior.
To further enhance scalability, opML incorporates performance optimizations like lazy loading—loading only required model segments into memory—and semi-native execution, where only disputed segments revert to verifiable computation. This approach enables opML to support large-scale models (e.g. 7B LLaMA) while maintaining verifiability when needed. On the protocol layer, smart contract interfaces manage submission, challenge periods, and dispute resolution workflows, enabling seamless integration with onchain systems. While opML still faces constraints—such as fixed finality windows and memory limitations—it presents a robust pathway for integrating verifiable ML into rollup environments, aligning with the broader goals of modular and decentralized computation.
The oppAI framework introduces a novel hybrid architecture that merges the computational efficiency of Optimistic Machine Learning (opML) with the strong privacy guarantees of Zero-Knowledge Machine Learning (zkML). Designed to enable secure, verifiable, and privacy-preserving AI inference on blockchain systems, oppAI addresses the core tradeoff in onchain AI: achieving both performance and confidentiality. By allowing developers to selectively partition AI models into components executed via optimistic or zero-knowledge verification, oppAI provides a flexible system tailored to diverse privacy and cost constraints.
At the core of oppAI lies a dual execution model. Non-sensitive or performance-critical model components are processed via opML’s Fraud Proof Virtual Machine (FPVM), while sensitive logic or proprietary model weights are processed under zkML, using zero-knowledge proofs such as zk-SNARKs. This architecture enables the same AI model to achieve high performance where needed, and high privacy where necessary. The execution workflow begins with a partitioned AI model—e.g., f₁ᵒᵖ, f₁ᶻᵏ, ..., fₙᵒᵖ, fₙᶻᵏ—where each submodel is designated for either optimistic or zk execution. After processing, the results are submitted to an onchain verifier, and a challenge-response game can be initiated if needed.
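The dual execution workflow can be sketched as a pipeline in which each submodel runs either optimistically (fast path, challengeable later) or under zk proving (private path, attested immediately). The proof strings and submodel functions below are stand-ins for real zkML machinery.

```python
# Sketch of oppAI's dual execution flow: each submodel runs either
# optimistically (challengeable afterwards) or with a (stubbed) zk
# proof attached. Functions and proof strings are illustrative.

def run_op(fn, x):
    # Optimistic path: just execute; correctness is enforced only if
    # someone opens a dispute afterwards.
    return fn(x), "challengeable"

def run_zk(fn, x):
    # zk path: execute and attach a (stubbed) proof checked onchain.
    y = fn(x)
    return y, f"zk_proof({x}->{y})"

pipeline = [
    (lambda x: x * 2, "op"),   # f1: non-sensitive, fast path
    (lambda x: x + 3, "zk"),   # f2: proprietary logic, private path
]

def infer(x):
    attestations = []
    for fn, mode in pipeline:
        x, att = run_op(fn, x) if mode == "op" else run_zk(fn, x)
        attestations.append(att)
    return x, attestations

y, atts = infer(5)
```

The final output carries one attestation per submodel, mirroring how the onchain verifier in oppAI checks zk proofs immediately while leaving optimistic results open to the challenge-response game.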
The oppAI system architecture builds on opML’s foundation and extends it with additional zkML capabilities. It includes components such as:
The FPVM (adapted from opML) for step-by-step verifiability in optimistic disputes;
A dual-compiled Machine Learning Engine, supporting both native and FPVM-compatible execution;
An interactive dispute resolution game for verifying opML results;
A zkML Prover that generates ZKPs for private components, alongside an onchain Verifier contract to check them.
This design offers direct applications within fault proof systems like those used in the OP Stack. First, oppAI enables privacy-preserving fault proofs, allowing sensitive data (e.g. user inputs or proprietary models) to be verified without full exposure. Second, it proposes a hybrid verification scheme where only the most privacy-sensitive operations incur zk costs, while the rest benefit from lower overhead optimistic execution. This creates a more efficient yet secure fraud proof system, especially valuable for increasingly complex computations such as AI oracles or ML-based coordination mechanisms in DAOs and L2 governance.
From a security standpoint, oppAI adopts an AnyTrust model, requiring only a single honest verifier to guarantee system integrity. Its economic security model disincentivizes attacks through dynamic inference pricing, calculated to keep the cost of reconstructing private models prohibitively high. Notably, the system allows tuning the ratio between zkML and opML (represented by p) to balance the tradeoff between privacy and efficiency. Performance benchmarks confirm that reducing zkML usage (lower p) significantly lowers computational overhead while still enabling verification of critical model segments like attention layers in large language models (e.g. Stable Diffusion, LLaMA).
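The effect of the ratio p can be made concrete with a toy cost model: total overhead interpolates between cheap optimistic execution and expensive zk proving as p grows. The unit costs below are illustrative assumptions, not measured benchmarks.

```python
# Sketch: the privacy/efficiency dial described above. With p the
# fraction of the model verified via zkML, total cost interpolates
# between optimistic and zk execution. Unit costs are assumptions
# for illustration, not measurements.

OP_COST_PER_UNIT = 1.0      # optimistic execution (baseline)
ZK_COST_PER_UNIT = 1000.0   # zk proving assumed far more expensive

def inference_cost(p, model_units=100):
    zk_units = p * model_units
    op_units = (1 - p) * model_units
    return zk_units * ZK_COST_PER_UNIT + op_units * OP_COST_PER_UNIT

cheap = inference_cost(0.05)   # e.g. prove only the attention layers
private = inference_cost(1.0)  # full zkML
```

Even under these rough numbers, proving only 5% of the model costs a small fraction of full zkML, which is the quantitative intuition behind keeping p low for all but the most sensitive components.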
In practical terms, oppAI significantly expands the scope of OP Stack-based chains by enabling:
Secure AI oracles, which deliver verifiable predictions to smart contracts without revealing internal logic;
Privacy-enhanced onchain inference, protecting model IP while maintaining fraud-proof guarantees;
Support for high-complexity computation, including federated ML, game theory agents, and dynamic governance models.
Despite its advantages, oppAI introduces new design and implementation challenges. Determining the optimal partitioning of model components to meet privacy requirements demands interdisciplinary knowledge of both ML and blockchain systems. Furthermore, managing performance tradeoffs and adapting to rapidly evolving zkML and opML ecosystems require continual iteration. Nonetheless, oppAI sets a strong precedent for modular, privacy-aware fault proof design—bridging efficient onchain computation with the next generation of decentralized AI applications.
DAVE (a name, not an acronym) is Cartesi’s advanced fraud-proof protocol, designed to strengthen decentralization, liveness, and efficiency in blockchain verification. It introduces a permissionless, tournament-style dispute mechanism, offering a compelling alternative to Optimism’s Fault Proofs and Arbitrum’s BoLD.
At its core, DAVE leverages Permissionless Refereed Tournaments (PRT)—a novel framework resistant to Sybil attacks, with constant hardware and bond requirements and an exponential resource advantage for honest actors. This design ensures that a single honest validator can defend the network—even against coordinated adversaries—at minimal cost.
DAVE operates on top of the Cartesi Machine, a deterministic RISC-V emulator that separates execution logic into two layers: a micro-architecture implemented in Solidity and a high-performance offchain computation layer. This dual-layer architecture ensures execution integrity while supporting complex computation without inflating the onchain footprint.
Unlike traditional fault proof systems, DAVE employs a tournament-style challenge mechanism, where validators resolve disputes through logarithmically-scaling games. These disputes typically conclude within 2–5 challenge periods, significantly reducing the overhead seen in earlier systems like Cartesi’s own PRT (which could take up to 20 weeks).
Compared to other systems, DAVE offers notable advantages:
Versus Optimism’s OPFP, it provides stronger Sybil resistance, faster dispute resolution, and lower validator costs.
Compared to Arbitrum’s BoLD, DAVE maintains similar security guarantees with significantly reduced bond requirements (~3 ETH vs. 3,600 ETH).
Internally, it improves liveness and reduces delay over Cartesi’s prior models while keeping decentralization intact.
DAVE’s flexible architecture allows it to serve both rollups with continuous input processing and compute-focused systems. While it currently relies on the Cartesi Machine, its execution environment agnosticism opens paths for integration with other rollup ecosystems.
All referenced documents are available in our open repository for transparency and further review: 🔗 FP research reports – Zenbit GitHub
Originally titled “2. Research on oppAI compatibility with the OP Stack,” this milestone was later renamed to “2. FPVM Comparative Analysis” to better reflect the broader scope of our investigation. While the initial focus centered on oppAI and opML, our research expanded to include a full evaluation of Optimism’s Canon FPVM and Cartesi’s DAVE—both foundational and alternative approaches to fault proof architecture. This milestone presents a structured comparison of these four mechanisms, each tackling a different facet of the fault proof challenge, from deterministic execution and computational efficiency to privacy-preserving inference and decentralized dispute resolution.
Our primary objective is to analyze the tradeoffs in architecture, performance, security, and accessibility, and to propose integration pathways that strengthen the OP Stack’s fault proof system. This comparative research is intended not only as a conceptual evaluation but as a blueprint for the OPcity prototype, guiding future technical contributions to the OP Stack and aligning with the broader mission of building decentralized, modular, and verifiable infrastructure.
These systems represent three main verification paradigms. Optimism’s Canon FPVM relies on deterministic execution using a lightweight MIPS VM and binary bisection games. opML enhances this by integrating ML-based prediction to streamline verification. oppAI extends opML by adding selective zkML to protect privacy-sensitive components. Meanwhile, Cartesi’s DAVE diverges with a tournament-based architecture running on a full RISC-V emulator, prioritizing permissionless participation and Sybil resistance.
1. Execution Environment:
Optimism uses a MIPS-based VM for its simplicity and determinism
Cartesi employs a more powerful RISC-V emulator with two-layer implementation
opML works within existing VMs but adds ML capabilities
oppAI partitions execution between zkML and opML components
2. Verification Approach:
Optimism uses a traditional bisection protocol to narrow disputes
opML enhances verification with machine learning predictions
oppAI selectively applies zero-knowledge proofs for privacy-critical components
DAVE implements tournament-style verification to improve decentralization
3. State Representation:
All systems use Merkle trees for state representation, but with different implementations
Optimism and Cartesi focus on complete state transitions
opML and oppAI allow for partial verification of specific components
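The shared Merkleized state representation can be sketched briefly: committing VM memory to a Merkle root lets a verifier check any single word of state against a logarithmic-size proof instead of the full address space. This toy builder uses SHA-256 and duplicates odd tails; the production systems use their own tree layouts and hash choices.

```python
# Sketch: the Merkleized state that all four systems rely on.
# Committing memory to a single root means any tampering with any
# word changes the root, and individual words can be verified with
# log-size proofs. Tree layout and hash are illustrative choices.

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate odd tail
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

memory = [b"word0", b"word1", b"word2", b"word3"]
root = merkle_root(memory)
tampered = merkle_root([b"word0", b"wordX", b"word2", b"word3"])
```

The difference in implementations noted above amounts to which leaves get committed: Optimism and Cartesi commit complete state transitions, while opML and oppAI partition the tree so individual components (model weights, I/O buffers) can be verified on their own.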

While Canon FPVM sets a reliable baseline, its simplicity constrains scalability. opML reduces computational load via ML-enhanced execution, offering performance improvements without altering core architecture. oppAI introduces more variable overhead due to zkML, but allows developers to balance privacy and efficiency by configuring which sub-models require zero-knowledge proofs. DAVE’s structure ensures that honest participants incur low costs even in worst-case scenarios, making it well-suited for decentralized validator environments.
1. Computational Requirements:
Optimism's Canon requires full execution of disputed transactions
opML reduces computation through predictive models
oppAI optimizes by applying heavy computation only to privacy-sensitive parts
DAVE minimizes honest validator costs regardless of adversary resources
2. Verification Time:
Optimism's verification time scales with computation complexity
opML potentially accelerates verification through ML predictions
oppAI's verification time varies based on privacy requirements
DAVE guarantees resolution in 2-5 challenge periods regardless of adversary count
3. Resource Asymmetry:
DAVE provides the strongest resource asymmetry, giving honest validators exponential advantage
opML offers efficiency advantages through prediction
oppAI balances resources based on privacy needs
Optimism requires similar resources for all participants
All systems adhere to a 1-of-N honesty assumption—where a single honest verifier can enforce correctness. Canon and opML rely on traditional bond-based incentives. oppAI innovates by making attack costs dynamic, based on the proportion of the model using zkML. DAVE provides the most robust Sybil resistance and censorship resilience, minimizing reliance on high bonds by leveraging logarithmic dispute scaling and permissionless participation.
Adversary Models:
All systems use a 1-of-N security model where a single honest validator can enforce correctness
DAVE specifically addresses Sybil attacks through its tournament structure
oppAI adds privacy considerations to the security model
opML potentially improves detection of sophisticated attacks
Economic Security:
Optimism relies on bonds to ensure honest behavior
DAVE minimizes bond requirements while maintaining security
oppAI introduces variable costs based on privacy needs
opML maintains the economic model of its host system
Censorship Resistance:
DAVE explicitly addresses censorship: an attacker must censor honest claims for more than one full challenge period to break consensus
Other systems have less explicit censorship resistance guarantees
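The 1-of-N assumption rests on interactive bisection: a single honest party can always narrow a disagreement over a long execution trace to one step, which the chain then re-executes. A minimal sketch of that narrowing, abstracting away bonds, timeouts, and on-chain mechanics (the function name is ours, not from any of these codebases):

```python
def bisect_dispute(honest_trace, claimed_trace):
    """Binary-search for the first step where two execution traces diverge.

    Both traces agree on the initial state (index 0) and disagree on the final
    state. Returns the index of the single step the chain must re-execute to
    settle the dispute — the core move shared by all four systems.
    """
    assert honest_trace[0] == claimed_trace[0]
    assert honest_trace[-1] != claimed_trace[-1]
    lo, hi = 0, len(honest_trace) - 1        # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == claimed_trace[mid]:
            lo = mid                          # divergence lies after mid
        else:
            hi = mid                          # divergence is at or before mid
    return hi                                 # first index where states differ
```

Because each round halves the disputed range, an honest participant needs only logarithmically many on-chain moves per dispute regardless of trace length — which is why the economic question above reduces to how many such disputes an adversary can force.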
Cartesi’s DAVE leads in terms of decentralization, with its low capital requirements and open validator access encouraging broader participation. Canon maintains moderate requirements, while opML and oppAI, despite their technical innovation, face participation challenges due to higher complexity and expertise requirements—particularly in machine learning and zero-knowledge systems.
Capital Requirements:
DAVE explicitly minimizes capital requirements (3 ETH vs. 3,600 ETH for Arbitrum's BoLD)
Optimism has moderate requirements
oppAI and opML inherit requirements from their base systems
Technical Barriers:
opML and oppAI introduce additional complexity through ML components
DAVE and Optimism have more straightforward verification processes
All systems require some technical expertise to participate effectively
Validator Diversity:
DAVE's low capital requirements potentially enable greater validator diversity
ML-based approaches might limit participation to those with ML expertise
All systems benefit from diverse validator sets for security
Several integration pathways could merge the best features of these mechanisms into a next-generation fault proof system. For example, pairing Canon FPVM with opML’s prediction layer could optimize dispute prioritization without modifying the core VM. Adding oppAI’s privacy-preserving zkML submodules would extend Canon’s use cases to sensitive applications like AI oracles. Most transformative would be adopting DAVE’s tournament-style dispute resolution, enabling permissionless participation with lower bonds and faster resolution—making Canon-based fault proofs more robust and accessible.
More ambitious would be a multi-system integration—combining Canon’s deterministic foundation, opML’s efficiency, oppAI’s privacy, and DAVE’s decentralization. While technically demanding, this could yield a modular fault proof architecture tailored for emerging rollup demands, from governance security to machine-learning-powered agents.
Canon + opML:
Potential Integration:
Enhance Canon FPVM with ML-based prediction for faster verification
Use ML to identify likely dispute points before full verification
Maintain Canon's deterministic execution while improving efficiency
Implementation Approach:
Add ML prediction layer to Canon without modifying core execution
Train models on historical dispute patterns
Use predictions to prioritize verification steps
Benefits:
Faster dispute resolution
Reduced computational overhead
Maintained security guarantees
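As a sketch of what such a prediction layer might look like — the heuristic, opcode list, and segment format below are entirely hypothetical — a scorer can reorder trace segments so verification effort goes first to the segments most likely to be disputed, without changing the final result:

```python
HOT_OPCODES = {"CALL", "SSTORE", "KECCAK256"}  # illustrative choice, not from opML

def toy_score(segment):
    """Stand-in dispute-likelihood model: fraction of 'hot' opcodes in a segment.
    A trained model on historical dispute patterns would replace this heuristic."""
    _, _, opcodes = segment
    return sum(op in HOT_OPCODES for op in opcodes) / max(1, len(opcodes))

def prioritize_segments(segments, score=toy_score):
    """Check the segments a model flags as dispute-prone first; the bisection
    outcome is unchanged, only the order of verification work differs."""
    return sorted(segments, key=score, reverse=True)

segments = [
    (0, 100, ["ADD", "MUL", "ADD"]),
    (100, 200, ["CALL", "SSTORE", "ADD"]),
    (200, 300, ["KECCAK256", "CALL", "SSTORE"]),
]
print(prioritize_segments(segments)[0][:2])  # the (200, 300) segment ranks first
```

The design point this illustrates is that the prediction layer is purely advisory: a wrong prediction wastes some effort but can never change which step ultimately gets re-executed, so Canon's security guarantees are untouched.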
Canon + oppAI:
Potential Integration:
Add privacy capabilities to Canon through selective zkML
Partition sensitive computations for privacy protection
Maintain efficiency for non-sensitive operations
Implementation Approach:
Implement model partitioning within Canon
Add zkML verification for privacy-critical components
Develop clear interfaces between zkML and opML parts
Benefits:
Enhanced privacy for sensitive operations
Maintained efficiency for standard operations
New use cases for private computation
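A minimal sketch of the partitioning idea, with hypothetical names and a simple per-layer privacy flag standing in for oppAI's actual partitioning logic: contiguous runs of layers are grouped so that public runs can be verified optimistically while flagged runs would receive zkML proofs:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    privacy_sensitive: bool   # flagged by the developer

def partition(model_layers):
    """Split a model into contiguous sub-models by privacy flag.

    Public runs are verified optimistically (opML-style); private runs would be
    proven with zkML. The interface between runs is the activation tensor that
    crosses the boundary, which must be committed to on both sides.
    """
    runs, current, flag = [], [], None
    for layer in model_layers:
        if flag is None or layer.privacy_sensitive == flag:
            current.append(layer)
        else:
            runs.append((flag, current))
            current = [layer]
        flag = layer.privacy_sensitive
    if current:
        runs.append((flag, current))
    return runs
```

For a model like `[Layer("embed", False), Layer("attention", True), Layer("head", False)]` this yields three runs, and only the middle one pays zkML proving costs — the variable-overhead tradeoff described above.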
Canon + DAVE:
Potential Integration:
Adopt DAVE's tournament approach for Canon's dispute resolution
Maintain Canon's execution environment while improving the challenge mechanism
Reduce capital requirements for participation
Implementation Approach:
Implement tournament-style challenges within Optimism's framework
Adapt Canon to work with logarithmic dispute resolution
Maintain compatibility with existing Optimism deployments
Benefits:
Improved decentralization through lower capital requirements
Enhanced resistance to Sybil attacks
Faster dispute resolution
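The tournament mechanism can be sketched as single-elimination pairing — a deliberate simplification of DAVE's actual protocol, which among other things can eliminate both sides of an incorrect-vs-incorrect match. The honest claim plays at most one match per round, so its cost grows with the logarithm of the number of competing claims rather than their count:

```python
def run_tournament(claims, is_correct):
    """Single-elimination tournament over competing state claims.

    Each round pairs surviving claims; a pairwise dispute (bisection plus
    one-step re-execution, abstracted here as `is_correct`) eliminates one
    side. The honest claim plays at most ceil(log2(len(claims))) matches —
    the source of the logarithmic scaling described in the text.
    """
    rounds = 0
    while len(claims) > 1:
        survivors = []
        for i in range(0, len(claims) - 1, 2):
            a, b = claims[i], claims[i + 1]
            survivors.append(a if is_correct(a) else b)  # dispute removes one side
        if len(claims) % 2:
            survivors.append(claims[-1])                 # bye for the odd claim out
        claims = survivors
        rounds += 1
    return claims[0], rounds
```

With one honest claim among 1,023 Sybil claims, the honest party wins in 10 rounds while each Sybil must post its own bond — the combination that lets DAVE keep individual bonds low without weakening security.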
Comprehensive Approach:
Combine elements from all four systems for a next-generation fault proof system
Use Canon's deterministic execution as the foundation
Enhance with opML's efficiency improvements
Add oppAI's privacy capabilities for sensitive operations
Implement DAVE's tournament structure for dispute resolution
Implementation Challenges:
Complexity of combining multiple novel approaches
Ensuring security guarantees are maintained
Managing the increased technical barriers to participation
Potential Benefits:
Best-in-class performance across all metrics
Support for diverse use cases including privacy-sensitive applications
Improved accessibility and decentralization
While Canon FPVM provides the deterministic foundation for state validation in the OP Stack, opML introduces machine learning-driven optimizations that reduce overhead and improve execution speed. oppAI extends this with a hybrid zkML/opML approach that brings selective privacy to verifiable AI computations, and DAVE’s tournament-based protocol raises the bar for decentralization, liveness, and Sybil resistance.
Rather than selecting a single solution, our analysis supports a modular integration of these technologies as the optimal path forward. By synthesizing their complementary strengths and mitigating their individual tradeoffs, we envision a next-generation fault proof system that is efficient, private, accessible, and secure—one capable of meeting the evolving needs of Ethereum rollups and complex decentralized applications.
This vision forms the design framework for the OPcity prototype, a research-driven implementation of an advanced fault proof system built on the OP Stack. Our blueprint is structured across four interoperable layers:
1. Execution Layer: Modified Canon FPVM with RISC-V compatibility
Deterministic execution environment based on Canon FPVM
Extended instruction set supporting both MIPS and RISC-V operations
Unified state representation using enhanced Merkle trees
Compatibility layer for existing applications
2. Intelligence Layer: ML-enhanced prediction and optimization engine
Predictive models for identifying potential dispute points
Optimization algorithms for efficient execution paths
Lightweight DNN library for in-VM machine learning
Training framework for continuous improvement
3. Privacy Layer: Selective zero-knowledge proof system
Model partitioning framework for separating privacy-sensitive components
Zero-knowledge proof generation for protected components
Efficient verification of ZK proofs on-chain
Economic security model for privacy protection
4. Verification Layer: Tournament-based dispute resolution system
Permissionless tournament structure for dispute resolution
Logarithmic scaling with adversary count
Low capital requirements for participation
Guaranteed dispute resolution timeframes
Together, these layers interact through clean, modular interfaces, enabling phased integration and future extensibility. This composable approach aligns with the principles of the OP Stack and supports OPcity’s long-term goal: to contribute a production-ready, decentralized fault proof architecture that strengthens Ethereum’s rollup ecosystem and expands its utility to civic infrastructure, public goods, and autonomous organizations.
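Under the stated assumptions — the interface names and signatures below are illustrative, not the OPcity codebase — the four layers can be sketched as composable Python protocols, with a minimal control flow tying the execution and verification layers together:

```python
import hashlib
from typing import Protocol

class ExecutionLayer(Protocol):          # Layer 1: deterministic execution
    def execute(self, pre_state: bytes, inputs: bytes) -> bytes: ...

class IntelligenceLayer(Protocol):       # Layer 2: advisory ML predictions
    def predict_dispute_steps(self, trace_root: bytes) -> list[int]: ...

class PrivacyLayer(Protocol):            # Layer 3: zk proofs for flagged components
    def prove(self, component: str, witness: bytes) -> bytes: ...

class VerificationLayer(Protocol):       # Layer 4: tournament dispute game
    def dispute(self, claimed: bytes, computed: bytes) -> bool: ...

def validate_claim(execution: ExecutionLayer,
                   verification: VerificationLayer,
                   pre_state: bytes, inputs: bytes, claimed: bytes) -> str:
    """Minimal cross-layer control flow: recompute the state root, compare it
    to the claim, and open a dispute on mismatch. Intelligence and privacy
    hooks are omitted here; they would plug in around these two calls."""
    computed = execution.execute(pre_state, inputs)
    if computed == claimed:
        return "accepted"
    return "challenged" if verification.dispute(claimed, computed) else "unresolved"

# Trivial stand-ins showing that the interfaces compose:
class HashExecution:
    def execute(self, pre_state, inputs):
        return hashlib.sha256(pre_state + inputs).digest()

class AlwaysChallenge:
    def dispute(self, claimed, computed):
        return claimed != computed
```

Because each layer is addressed only through its interface, any one of them — say, the dispute game — can be swapped or upgraded without touching the others, which is the phased-integration property the blueprint depends on.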